Limitations
AI tools have serious limitations that are easily obscured by overconfident language, both from stakeholders (such as the companies that make the tools) and from the tools' own output. Awareness of these limitations will help you evaluate whether to use AI at all, which tool to use, and how far to trust its output.
Common Sense, Originality, and Ethical Reasoning
AI can make decisions and generate content from provided data and previously learned "rules," but it cannot match human creativity or originality. Nor can AI meaningfully evaluate the ethics of its own content or make ethically informed decisions about what to do with its output. Moreover, some argue that AI use actively stifles human creativity by outsourcing critical thought.
Hallucinations
"Hallucinations" refer to AI-generated content that is false or inaccurate. Examples include sources that do not exist, links that lead nowhere, or untrue facts. When asked for a list of references, an LLM generates a plausible list of what sources wouldlook like on that topic, not necessarily what sources actually exist on that topic. LLMs are prediction machines. They cannot fact-check or "understand" the content they are generating.
Some AI tools do link to real sources for research purposes. Even these tools, however, are limited in the content they can access (see "Source Accessibility" below).
Bias
Bias in AI training and output can have serious consequences, especially when users accept results without critical examination or assume that machine-generated content must be neutral. Bias can enter through training data, from which models "learn" harmful patterns and associations, and through the human guidance used to steer AI models.
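To make the training-data pathway concrete, here is a toy sketch (synthetic data, assuming scikit-learn and NumPy, with all names hypothetical): a classifier is trained on a skewed sample in which an irrelevant attribute happens to correlate with the label, and it then reproduces that correlation on new inputs that are otherwise identical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuinely predictive signal.
# Feature 1: an irrelevant attribute (e.g. a demographic proxy) that,
# in this skewed sample, happens to correlate with the label.
signal = rng.normal(size=n)
label = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)
proxy = label * 0.8 + rng.normal(scale=0.5, size=n)  # spurious correlation

X = np.column_stack([signal, proxy])
model = LogisticRegression().fit(X, label)

# The model assigns real weight to the proxy feature, so two inputs
# with identical genuine signal get different predictions depending
# only on the irrelevant attribute.
print("learned weights:", model.coef_)
print("same signal, proxy=-1:", model.predict_proba([[0.0, -1.0]])[0, 1])
print("same signal, proxy=+1:", model.predict_proba([[0.0, 1.0]])[0, 1])
```

The same dynamic plays out, far less visibly, in large models trained on web-scale data: correlations present in the training set become patterns in the output, whether or not they are accurate or fair.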