ChatGPT, like many chatbots, is pitched as a hyper-competent personal assistant. But among the many things that confuse it, one is particularly confounding: It cannot tell time.
When I ask ChatGPT what time it is, I’m never quite sure what I’ll get. Sometimes, it tells me it can’t do it. “I don’t have access to your device’s real-time clock or your location, so I can’t tell the exact local time for you,” it wrote to me at 4:15 PM Eastern Standard Time about a week ago. “But I do know today’s date according to my system: 2025-11-20.” (Bolded by ChatGPT, I assume, to make sure I didn’t overlook the things it was doing well.) Sometimes it asks me to specify a city or time zone, only to reveal it can’t reliably check time that way either — “It’s 12:42 PM in New York (Eastern Time, assuming your system clock is correct),” ChatGPT wrote to me at 11:08 AM. And sometimes it does provide exactly the correct time, until I ask a couple of minutes later, and it gets it wrong again.
We aren’t the first to bring it up. The problem of time comes up frequently on Reddit and ChatGPT’s forums. One user urged OpenAI to “pay attention to this” because it gives “a bad name” to the AI model “with cognitive abilities far superior than my own.” Features like web search have offered some work-arounds. But years after launch, vanilla ChatGPT remains blissfully indifferent to the ticking of the clock — and as absurd as the situation might seem, there’s a simple reason for that.
Telling time is trivial for any computer or phone, thanks to the tiny clock chips ticking away inside them. But generative AI systems like the large language and visual models powering ChatGPT, Google’s Gemini, Anthropic’s Claude, and others are built for a very different purpose. By default, they take in user queries and predict answers based purely on their training data. That data doesn’t include constant, real-time information like the current time, unless the models specifically search the internet for it.
“A language model works in its own space of language and words. It is only referencing things that have entered this space,” AI robotics expert Yervant Kulbashian, who wrote in 2024 about how AI perceives time, told The Verge. The model is like a castaway on an island in the middle of the ocean, stocked with a massive collection of books but no watch.
Why can’t OpenAI just build a bridge to that island and give ChatGPT access to a system time clock? The short answer is, it can. As I chatted with Pasquale Minervini, who researches natural language processing in the School of Informatics at the University of Edinburgh in Scotland, his desktop ChatGPT app immediately gave him the correct time in Milan, Italy — where he was located during our interview. “It’s able to tell the time if you give it access to a clock. Otherwise, it’s something that was just born in that moment, in a way,” he said. The timely information was likely “embedded in the context” of the app, he said. He had previously enabled the “Search” function on the ChatGPT app, which means that ChatGPT has permission to tap into his computer’s built-in time tools, in addition to the web, to get the time.
OpenAI told us as much. “The models powering ChatGPT don’t have built-in access to the current time, so for up-to-date facts ChatGPT sometimes needs to call search to pull in the latest information,” spokesperson Taya Christianson wrote to The Verge.
There are tradeoffs to keeping LLMs aware of the time, Kulbashian said. ChatGPT has finite space in what’s called its context window, or the portion of information “remembered” at any given time. Every time ChatGPT consults a system clock, it adds a piece of information to that context window — to use another metaphor, imagine somebody stacking a stopped clock on a desk every second. “If you start adding more things onto your desk, you have to eventually start pushing things off,” Kulbashian said.
Updated often enough, clock clutter could amount to mere noise for the AI system. “You might end up kind of confusing the robot,” Kulbashian said. “If we’re having a conversation, and then somebody, every so often, was popping in and saying, ‘It’s 5:45.’ ‘It’s 5:46 now.’” By contrast, something like the date is relatively easy to include in a system prompt at the start of a chat — which one apparent ChatGPT system prompt leak seems to show.
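The date-in-the-system-prompt approach Kulbashian describes can be sketched in a few lines. The prompt wording and function name below are hypothetical, not what ChatGPT actually uses:

```python
from datetime import datetime, timezone

def build_system_prompt() -> str:
    """Build a chat's system prompt with the current date baked in,
    the way a ChatGPT-style app can give the model a sense of "today".
    The wording is purely illustrative, not OpenAI's actual prompt."""
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return f"You are a helpful assistant. Current date: {today}."

# The date is stamped once, when the conversation starts. Minutes later
# the prompt is stale -- which is why a date is cheap to include, but a
# ticking clock is not.
print(build_system_prompt())
```

Because the prompt is written once per conversation, this trick works for slow-moving facts like the date but would drift within minutes if used for the time of day.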
ChatGPT users can tell the time without too much fuss by asking the chatbot specifically to search for it. (Some other chatbots, like Google Gemini, will automatically search for the time.) You can also use an open-source model context protocol to connect an AI application to your data. That said, sending AI models to search the web or letting them access personal data comes with risks, like the bot being injected with malicious prompts that are scattered across the internet, Minervini said.
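The tool-style plumbing that lets a chatbot fetch the time on demand can be sketched roughly as follows. The tool name, registry, and dispatch function here are hypothetical illustrations, not the actual Model Context Protocol wire format:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical tool registry: one way a chat app can expose a clock
# "tool" the model may call, instead of guessing from training data.
TOOLS = {
    "get_current_time": lambda tz="UTC": datetime.now(ZoneInfo(tz)).strftime("%H:%M %Z"),
}

def handle_tool_call(name: str, **kwargs) -> str:
    """Dispatch a model-requested tool call and return the result as
    text, which the app would append to the conversation context."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](**kwargs)

print(handle_tool_call("get_current_time"))
print(handle_tool_call("get_current_time", tz="Europe/Rome"))
```

Each call injects a fresh timestamp into the conversation only when asked, which avoids the clock clutter Kulbashian warns about, but it also means every answer spends some of the context window.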
Minervini, who finds blind spots in consumer AI technology as part of his research, says there’s actually a whole list of time-related tasks it hasn’t mastered. He’s prompted leading AI models with pictures of analog clocks and found that models struggle to read the positions of the two clock arms. Calendars, he told me, are “also weird.”
Perhaps the bigger issue, for the average user, is that ChatGPT can’t reliably make clear what its limitations are. A human assistant who simply doesn’t know the time might be understandable; one that regularly lies about knowing it would probably get fired. But, of course, large language models aren’t lying — they’re just predicting, as usual, what you want to hear.
As for how ChatGPT decides when to search for the time, OpenAI’s Christianson said, “we’re continuing to improve how consistently it knows when to do so.”


