Is AI Only to Blame? Assessing Teachers’ Perceived Challenges in AI Detectability
DOI: https://doi.org/10.37134/ajatel.vol15.2.2.2025

Keywords: AI Text Detection, Teacher Perception, Academic Integrity, Mixed Method Research, Educational Policy

Abstract
Existing research on teachers’ ability to detect AI-generated texts has predominantly emphasized technical shortcomings, overlooking the behavioral and environmental factors that shape detection accuracy. As generative AI becomes embedded in education, understanding how institutional and personal contexts influence teachers’ detection performance is crucial for safeguarding academic integrity. This study identifies and analyzes the key internal (behavioral) and external (institutional and contextual) factors affecting teachers’ ability to distinguish AI-generated from human-written texts, and examines how these factors interact across global regions to build a more comprehensive framework for understanding detection challenges. An exploratory sequential mixed-method design was employed. The first phase involved 15 key informant interviews with educators from three continents to identify salient determinants of detection capability. Insights from this phase guided the development of a survey administered to 317 teachers across four continents. Data were analyzed using Structural Equation Modeling (SEM) to test the interrelationships among the identified factors. Findings revealed that rigid university policies significantly hinder teachers’ detection ability, especially in Europe, both directly and indirectly through time limitations and content indistinguishability. By integrating behavioral and contextual dimensions, the study moves beyond technically centered perspectives and proposes a global framework for understanding AI detectability. The results carry theoretical and practical implications for policymakers and AI developers. Limitations include reliance on perception-based data and the absence of African representation, warranting broader, experimental validation in future research.
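To make the analytic setup concrete, the sketch below shows how a mediation structure of the kind described in the abstract (a direct effect of policy rigidity on detection ability plus indirect effects through time limitations and content indistinguishability) could be specified and estimated. This is a minimal, hypothetical illustration, assuming composite scores and the Python package semopy; the variable names (policy_rigidity, time_limitation, indistinguishability, detection_ability) and the simulated data are illustrative stand-ins, not the authors’ actual survey items, software, or estimates.

```python
# Illustrative sketch only: hypothetical variable names and simulated data,
# not the authors' actual measurement model or results.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(42)
n = 317  # survey sample size reported in the abstract

# Simulated composite scores standing in for the constructs named in the abstract.
policy_rigidity = rng.normal(size=n)
time_limitation = 0.5 * policy_rigidity + rng.normal(scale=0.8, size=n)
indistinguishability = 0.4 * policy_rigidity + rng.normal(scale=0.8, size=n)
detection_ability = (-0.3 * policy_rigidity
                     - 0.4 * time_limitation
                     - 0.3 * indistinguishability
                     + rng.normal(scale=0.7, size=n))

data = pd.DataFrame({
    "policy_rigidity": policy_rigidity,
    "time_limitation": time_limitation,
    "indistinguishability": indistinguishability,
    "detection_ability": detection_ability,
})

# Structural model: direct path from policy rigidity to detection ability,
# plus indirect paths mediated by time limitation and indistinguishability.
desc = """
time_limitation ~ policy_rigidity
indistinguishability ~ policy_rigidity
detection_ability ~ policy_rigidity + time_limitation + indistinguishability
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # path coefficients, standard errors, p-values
print(semopy.calc_stats(model))  # model fit indices (e.g., CFI, RMSEA)
```

In a real analysis the composite scores would come from the survey instrument, latent constructs would typically be defined with measurement equations, and indirect effects would be tested explicitly (e.g., via bootstrapped products of path coefficients); the sketch only conveys the shape of the hypothesized direct and mediated relationships.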
License
Copyright (c) 2025 Ahnaf Chowdhury Niloy, Tazreen Huda

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.