Learning Materials | Limitations in Academia and Beyond
This section addresses a critical limitation of AI technologies: hallucinations, the confident generation of false or fabricated information. The article from Scientific American explores why AI chatbots may never be fully free of this problem, while the Marketplace Tech podcast episode examines AI-generated fake citations. Together, these resources underscore the importance of critically evaluating AI-generated content and refining how we interact with these tools to ensure reliability in academic and professional contexts.
AI Chatbots Will Never Stop Hallucinating
Citation:
Leffer, L. (2024, April 5). AI chatbots will never stop hallucinating. Scientific American. https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/
Don’t be surprised by AI chatbots creating fake citations
Citation:
McCarty Carino, M., & Shin, D. (2023, April 13). Don’t be surprised by AI chatbots creating fake citations [Audio podcast episode]. In Marketplace Tech. Marketplace. https://www.marketplace.org/shows/marketplace-tech/dont-be-surprised-by-ai-chatbots-creating-fake-citations/