
Secretive partnership between Jony Ive and OpenAI to build a screenless AI device hits delays amid compute scarcity and design dilemmas.
Reports reveal the team grapples with how the device should “feel” and whether it can run powerful AI models locally without data centers.
The project’s stumbles expose broader struggles to balance privacy, utility and sustainability in the next generation of personal AI hardware.
A year ago, news that Apple design legend Jony Ive was teaming up with OpenAI to create a radical new AI device sent shockwaves through the tech industry. The device, rumored to be a palm‑sized, screenless assistant, would usher in “the next computing paradigm” by combining Ive’s minimalist hardware aesthetic with OpenAI’s conversational AI. Now, reports suggest the project has stalled. According to sources, the team has not agreed on the device’s personality, privacy features or on‑device compute strategy, and compute scarcity is further hampering progress.
Why this matters
The partnership between Ive and OpenAI promised to define a post‑smartphone era. The device would merge AI into everyday life, not through a screen but through natural conversation and contextual awareness. Its success could determine whether general‑purpose AI assistants become something we carry as routinely as smartphones. But building such a device is hard: AI models require immense computing power, and running them locally on a small gadget means either using smaller models (with limited capabilities) or relying on cloud servers (raising privacy concerns). The delays highlight the tension between design ideals and technical realities.
Chronology of the AI device journey
September 2024 – Initial reveal. The New York Times reported that Jony Ive and Sam Altman were prototyping a screenless AI gadget. The device would recognize faces, answer questions and perform tasks through voice and gestures. Ive’s team, which included engineers from his LoveFrom design studio, was inspired by the iPod and Apple Watch.
October 2024 – OpenAI acquires LoveFrom hardware division. OpenAI reportedly paid $6.4 billion to fold Ive’s team into the company. Sarah Friar, then OpenAI’s CFO, said the acquisition aimed to create “a new substrate for computing,” likening it to the transition from flip phones to smartphones.
March 2025 – Concept leaks. Sketches leaked showing a pebble‑shaped device with cameras, a speaker and no traditional screen. Enthusiasts dubbed it the “Pebble AI.”
June 2025 – Engineering hiccups. Reports suggested that the team struggled to miniaturize the hardware while accommodating cooling, sensors and a battery. Compute scarcity became acute: OpenAI’s data centers were already strained by ChatGPT demand, and acquiring more GPUs proved difficult due to global chip shortages.
October 2025 – Delays confirmed. PYMNTS and other outlets reported that the device’s release was pushed back indefinitely. Sources said the team had not settled on whether the device should be friendly and anthropomorphic or minimalist and functional. There were also unresolved debates about whether to offload computation to OpenAI’s cloud, raising privacy red flags.
Background and design dilemmas
The concept behind Ive and OpenAI’s AI device is to create a gadget that serves as a constant companion without demanding your attention. Instead of pulling out a phone, you would speak to the device or use subtle gestures. It would proactively deliver information, like reminding you of appointments, suggesting routes, or providing translations. Its form factor is rumored to be a smooth, camera‑equipped disc or pin that you can wear or hold. The design echoes devices like Humane’s AI Pin and Rabbit R1, both of which promised to replace smartphones with voice‑first AI.
However, building such a device collides with the realities of AI. Running models like GPT‑4 locally requires high‑end chips, large memory and constant power, all of which are difficult to cram into a small, unobtrusive product. Relying on cloud servers solves the compute problem but raises privacy concerns, because every query travels to OpenAI’s servers. Jony Ive reportedly insisted on maintaining user privacy, meaning the device should minimize data uploads. This tension remains unresolved.
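To make that constraint concrete, the back‑of‑the‑envelope sketch below estimates how much memory a language model’s weights alone would occupy at different sizes and quantization levels. The parameter counts and precisions are illustrative assumptions, not figures from the Ive–OpenAI project.

```python
# Rough, illustrative estimate of the on-device memory needed just to store a
# language model's weights. The parameter counts and precisions below are
# assumptions for illustration, not specifications of the Ive-OpenAI device.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Approximate weight storage in GB (ignores activations and the KV cache)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1e9

for params in (3, 7, 70):                      # small, phone-class, server-class
    for precision in ("fp16", "int8", "int4"):
        gb = weight_memory_gb(params, precision)
        print(f"{params:>3}B parameters @ {precision}: ~{gb:6.1f} GB")
```

Even under aggressive 4‑bit quantization, a 70‑billion‑parameter model needs roughly 35 GB for its weights alone, while a 7‑billion‑parameter model squeezes into about 3.5 GB. That gap is the practical choice facing the team: a much smaller on‑device model, or a round trip to the cloud.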
Quotes and reactions
An anonymous engineer on the project told PYMNTS that the hardest part is deciding the AI’s personality: “Should it be like Siri—polite but distant—or more like a friend? That choice defines the hardware and the experience.”
Sarah Friar said during the acquisition announcement that the device “marks a new substrate for computing,” similar to how the iPhone replaced the flip phone.
Some critics on LinkedIn argued that a screenless device cannot replace a smartphone because people still need to view videos, images and maps. Others believe that an AI companion could complement a smartphone rather than replace it.
Hardware designer and YouTuber Dave2D praised the minimalist concept but doubted it could run GPT‑4 locally: “Unless they’ve built a new kind of chip, physics says you need more space or more heat.”
Privacy advocates warn that voice‑first AI devices risk recording sensitive data. Without a screen, users cannot easily verify what data is captured.
Evidence and design timeline
The project timeline above summarizes the key milestones in the Jony Ive–OpenAI device project: the initial reveal, the acquisition, concept leaks and the confirmed delays. It underscores the project’s long gestation and shows how external factors like compute scarcity have affected the schedule.
Analysis and implications
Ambition vs. feasibility
The idea of a screenless AI device aligns with a broader movement to build ambient computing—technology that dissolves into everyday life. But making this vision real requires hardware breakthroughs. Without a display, the device must rely on sound and subtle visual cues. Micro‑LED or holographic projection might be options, but they add complexity. Compute scarcity exacerbates the challenge: if OpenAI cannot secure enough GPUs for ChatGPT, it’s unclear how it will supply chips for millions of personal devices. Even if the company opts for smaller on‑device models, it would need to compress them significantly, potentially leading to underwhelming experiences.
Privacy and ethics
Ive’s design ethos often prioritizes user privacy (e.g., Apple’s emphasis on on‑device processing). Integrating that ethos with OpenAI’s data‑hungry models is tricky. One approach is to run smaller models locally for simple tasks and use the cloud for complex queries. Another is to implement differential privacy and on‑device encryption. However, the success of the device will depend on whether users trust it. In the age of voice assistants that occasionally record conversations inadvertently, a device that listens constantly could face regulatory scrutiny.
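One way to picture the hybrid approach described above is a local‑first routing policy: simple requests are answered entirely on the device, and harder ones are escalated to the cloud only after redaction and with explicit user consent. The sketch below is purely illustrative; the complexity heuristic, threshold and redaction step are hypothetical and do not describe the actual device’s architecture.

```python
# Minimal sketch of a local-first routing policy for a voice assistant.
# Everything here (the complexity heuristic, the threshold, the redaction
# step) is a hypothetical illustration, not the Ive-OpenAI device's design.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Query:
    text: str
    user_allows_cloud: bool = False

def estimate_complexity(q: Query) -> float:
    """Crude proxy: longer, multi-step requests are treated as more complex."""
    return min(1.0, len(q.text.split()) / 50)

def redact(text: str) -> str:
    """Placeholder for scrubbing obvious identifiers before any upload."""
    return text  # a real implementation would strip names, addresses, etc.

def route(q: Query,
          local_model: Callable[[str], str],
          cloud_model: Callable[[str], str],
          threshold: float = 0.4) -> str:
    """Answer on-device when possible; escalate to the cloud only with consent."""
    if estimate_complexity(q) <= threshold:
        return local_model(q.text)           # stays on-device; nothing is uploaded
    if q.user_allows_cloud:
        return cloud_model(redact(q.text))   # escalated only after redaction
    return "This request needs cloud processing; enable it in settings to continue."
```

A policy like this keeps routine queries private by default and makes any trip to the cloud an explicit, user‑controlled exception, which is roughly the balance the reported privacy debates are about.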
Market context
The AI device enters a crowded field. Humane’s AI Pin, launched in 2024, promised similar capabilities but received mixed reviews over its high cost and limited functionality. Rabbit’s R1, which has a small screen, faced supply chain issues. Meta’s Ray‑Ban smart glasses integrate AI but remain a niche product. Consumers may be skeptical of new hardware categories after years of gimmicky wearables. For the device to succeed, it needs a killer use case: something that smartphones cannot easily do. The synergy between Ive’s design and OpenAI’s AI could deliver that, but the delay suggests the team hasn’t found it yet.
Financial implications
OpenAI’s financial position adds another layer of pressure. The company recently hit a $500 billion valuation, making it one of the most valuable startups ever. Investors expect bold products. At the same time, compute shortages constrain revenue growth: there are only so many GPUs available, and new ones are expensive. Building a consumer device requires heavy upfront investment and manufacturing expertise that OpenAI lacks. The partnership with LoveFrom offered design talent but not supply chain muscle. This may be why the project has yet to produce a prototype despite the acquisition cost.
What’s next
Reports suggest that the Ive/OpenAI team is now exploring multiple form factors, including a pendant that projects a holographic interface onto surfaces, and a clip‑on device that works with existing smartphones. The company may also wait for more efficient AI chips, such as those based on neuromorphic computing, before releasing the device. In parallel, Apple’s rumored AI glasses and Amazon’s voice wearables could eat into the market. The next update from the Ive–OpenAI project is expected in early 2026. Whether the device ultimately appears or remains a tantalizing rumor, its journey reveals the challenges of integrating cutting‑edge AI with human‑centered design.