
Summary: OpenAI Codex for Mac has added Chronicle, a research preview feature that periodically captures screenshots, sends them to OpenAI servers for processing, and stores text summaries as unencrypted local Markdown files to provide the AI assistant with passive context about user activity. The feature is not available in the EU, UK, and Switzerland, requires a $100+/month Pro subscription and Apple Silicon, and represents OpenAI’s first implementation of AI with ambient screen recognition on desktop, choosing cloud processing and utility over the on-premise privacy architecture adopted by competitors like Screenpipe and the now-defunct Rewind AI.
OpenAI’s Codex desktop app for Mac has gained a feature called Chronicle that periodically captures your screen, processes the content into text summaries, and stores those summaries as local memory files that give the AI assistant context about what you’ve been working on. The feature, released as a research preview, means Codex can now understand your recent activity without you having to explain it. It also means that OpenAI is sending screenshots of your desktop to its servers for processing, a design choice that puts Chronicle in direct tension with the privacy-first direction that much of the industry has been moving towards.
Chronicle is part of a broader update that transformed Codex from a coding assistant to a general-purpose AI workspace. The April 16 release, titled “Codex for (almost) everything,” added desktop capabilities that allow Codex to operate Mac applications with its own cursor, an in-app browser, image generation, persistent memory, and more than 90 plugins. More than one million developers have used Codex and its usage doubled after the release of the GPT-5.2-Codex model in December.
How the chronicle works
Chronicle runs agents in the background that periodically capture screenshots of your screen. Those screenshots are sent to OpenAI servers, where they are processed using OCR and visual analysis to generate text summaries. The summaries are saved as Markdown files in a local directory on ~/.codex/memories_extensions/chronicle/. When you later request Codex, those memory files are included in your context window, allowing you to understand what apps you were using, what documents you were reading, what code you were writing, and what conversations you were having, all without you repeating any of it.
Raw screenshots are temporarily stored in a temporary system directory and are automatically deleted after six hours. OpenAI states that screenshots are not stored on their servers after processing and are not used for training. The memories generated, however, persist indefinitely as unencrypted text files on your machine.
Greg Brockman, president of OpenAI, described the feature as “an experimental feature that gives Codex the ability to see and remember what you see, automatically giving it the full context of what you’re doing. It’s surprisingly magical to use.”
The architecture of privacy
Chronicle requires macOS accessibility and screen recording permissions. It’s available only on Apple Silicon Macs running macOS 14 or later, and only for ChatGPT Pro subscribers who pay $100 or more per month. It is not available in the EU, UK or Switzerland, a geographic restriction that strongly suggests that OpenAI recognizes the feature’s incompatibility with GDPR requirements for data minimization and purpose limitation.
The comparison with Microsoft Recall is instructive. Recall, which launched on PCs running Windows Copilot+, takes screenshots every few seconds and stores them in an encrypted local database, with all processing handled by a neural processing unit on the device. No screenshot data comes out of the machine. Chronicle takes the opposite approach: processing is done in the cloud, but only text summaries are retained locally. Recall encrypts your database and requires biometric authentication through Windows Hello. Chronicle stores your memories as unencrypted Markdown files that can be accessed by any process running on the computer.
OpenAI’s own documentation explicitly acknowledges the risks. Chronicle “increases the risk of rapid injection” because malicious content from a website you visit could be captured in a screenshot and interpreted as instructions by the AI. The memory directory “could contain confidential information.” And the feature “uses rate limits quickly,” meaning Pro subscribers may find their Codex usage limited by Chronicle’s background activity.
OpenAI recommends pausing Chronicle before meetings or when viewing sensitive content. Users can pause and resume via the Codex menu bar icon. The recommendation itself is telling: it recognizes that the feature will capture things it shouldn’t and shifts the burden of managing that risk to the user.
The category and its losses
AI assistants with screen recognition have had a turbulent history. Rewind AI, the most prominent early entrant, changed its name to Limitless before being acquired by Meta in December 2025. The Mac app was closed and screenshots were disabled. The Microsoft Copilot has lost 39% of its subscribers in six months, partly due to trust issues that extend to Recall. A security researcher demonstrated in early 2026 that Recall’s encrypted database could still be exploited, reinforcing concerns that had dogged the feature since its announcement.
The open source Screenpipe alternative offers a local-first approach: continuous screen and audio capture processed entirely on the device, with a $400 lifetime license and no recurring cloud dependency. The Perplexity Personal Computer The software takes another approach, turning a Mac mini into a persistent AI agent with access to local files and applications, although it also relies on cloud processing for its core intelligence.
The pattern across the category is consistent: The more useful an AI with screen recognition becomes, the more data it needs to process, and the harder it becomes to reconcile that data appetite with user privacy. Chronicle opts for utility over privacy architecture, betting that OpenAI’s promise of not storing or training data, combined with the six-hour deletion window, is enough to earn user trust. Whether that gamble pays off depends entirely on whether users believe in the promise and whether OpenAI can keep it as the feature grows.
The ambient computing context
Chronicle comes as the industry converges on the idea that AI assistants need to understand their context without being told. Apple is testing smart glasses with AI Designed as ambient input channels for Apple Intelligence. Slack’s recent AI overhaul turned Slackbot into a desktop agent with deep context about your work communications. OpenAI itself is developing a screenless hardware device with Jony Ive that is explicitly positioned for an era of “ambient AI.” Gartner predicts that more than 40% of large companies will implement ambient intelligence pilots by 2026.
The thesis is that AI becomes dramatically more useful when it has passive, continuous access to what you’re doing rather than requiring you to articulate your needs from scratch each time. Chronicle is OpenAI’s first implementation of that thesis on the desktop, and it works: By Brockman’s account and the feature’s design, eliminating the need to re-explain context to an AI assistant is a real productivity win.
But the thesis has a cost. Alternatives that prioritize privacy As Proton’s AI tools demonstrate, useful AI can run locally on open source models without sending user data to anyone’s servers. The question Chronicle raises is not whether AI with screen recognition is useful. Clearly it is. The question is whether the cloud-processed, trust-dependent model that OpenAI has chosen will survive contact with the regulatory environment that has already excluded it from three jurisdictions, and with users who have seen enough AI companies promise data privacy only to quietly revise their terms when the economics demanded it.





