Large language models (LLMs) such as ChatGPT have wowed the world with their ability to generate text in a human-like manner. While educators evaluate how AI will impact the future of learning, we identify mistakes ChatGPT makes. We extend this concern to financially unsophisticated users seeking to improve their financial literacy, who may lack the financial acumen to recognize when the AI is hallucinating. Using a longitudinal study, we frame the prompts and subsequent findings within the four stages of the Dunning-Kruger effect to explore how users of varying expertise receive output from LLMs. We find that ChatGPT cannot always fully distinguish among three different user groups. Our findings have important implications for accountants, educators, and students using LLMs as tools in work and education, and for the general population looking to bypass financial experts for their personal finance needs. Data Availability: Data will be made available upon request. JEL Classifications: M41.