OpenAI Smartphone Release Date: 2028 Confirmed? Full Timeline & First Device Revealed

OpenAI’s smartphone is expected to enter mass production in 2028, according to analyst Ming-Chi Kuo. The company’s first hardware announcement will happen in H2 2026, focusing on a screenless wearable device called Sweetpea.
Sam Altman built the most-used AI product on the planet. Now he wants to build the hardware you carry it on.
On April 27, 2026, supply chain analyst Ming-Chi Kuo — whose Apple supply-chain predictions have an unusually strong track record — published a detailed breakdown of OpenAI’s smartphone ambitions. The AI industry stopped scrolling. Qualcomm shares jumped 9% that afternoon. Apple dipped 1.9%. Luxshare gained 10% in Shenzhen.
But here’s what most coverage missed: OpenAI isn’t building one device. They’re building a family of AI hardware. The phone is real. It’s also three years away. And the device that actually ships this year looks nothing like a smartphone.
I’ve gone through every executive statement, supply-chain leak, court filing, and analyst report. This is the only guide that maps the complete OpenAI hardware timeline — what’s confirmed, what’s rumored, and what it actually means for the devices you carry today.
| Device | Codename | Expected Launch | Form Factor | Status |
|---|---|---|---|---|
| OpenAI Wearable (1st device) | Sweetpea | H2 2026 announcement / 2028 mass production | AI earbuds / wearable audio | Confirmed by OpenAI exec |
| OpenAI Pen Device | Gumdrop | 2027–2028 | Pen-form AI companion | Leaked (limited details) |
| OpenAI Smartphone | Unknown | 2028 mass production | AI-first phone, no app stores | Analyst report (Ming-Chi Kuo) |
Why OpenAI Is Building Hardware Now
ChatGPT is approaching one billion weekly users. That’s an enormous installed base — but every single one of those users accesses it through hardware OpenAI doesn’t control.
Apple and Google own the app pipeline. They decide what level of system access apps get. They control permissions, sensors, microphone access, context awareness. When you use ChatGPT on an iPhone, Apple is the gatekeeper between OpenAI and your real-world behavior.
Kuo’s argument is simple: a phone that OpenAI builds and controls would let it integrate AI at the hardware layer — not as an app sitting on top of someone else’s OS. No restrictions on what the model can see. No permission dialogs every time the AI needs to understand your context. No revenue share with an app store.
There’s also a financial logic. OpenAI reportedly doesn’t expect profitability until 2030. Hardware subscriptions bundled with the device could accelerate that timeline significantly. Altman himself has said the goal is to produce 100 million devices “faster than any company has ever shipped 100 million of something new.”
Bold claim. The question is whether they can execute — and whether users actually want it.
OpenAI’s First Device: Sweetpea Is Not a Smartphone

Before we get to the phone, let’s talk about what OpenAI is actually shipping first.
The primary launch device is codenamed Sweetpea. Multiple supply-chain reports from Asian publications, corroborated by leaks published by Smart Pikachu and covered by Hypebeast, Gizmochina, and Android Central, describe it as a screenless, behind-the-ear AI wearable. Not earbuds in the traditional sense — something closer to a metal egg you wear on your ear, with two pill-shaped ceramic modules.
Key specs that have leaked so far:
| Spec | Detail | Source confidence |
|---|---|---|
| Form factor | Metal eggstone main body + behind-ear pill modules | Multiple leaks, high confidence |
| Chip | 2nm Exynos-class SoC + custom AI silicon | Leaked, not confirmed by OpenAI |
| Interface | Voice-first, no wake word, always-on context awareness | Confirmed by Altman statements |
| Cameras / mics | Environmental sensing — sees and hears surroundings | Multiple reports, confirmed direction |
| Screen | None | Confirmed by Altman + Ive philosophy |
| Manufacturer | Foxconn (Vietnam / US, not China) | Supply chain reports, Luxshare dropped |
| Target production | 40–50 million units first year | Leaked, ambitious target |
| Estimated launch | Announcement H2 2026, shipping 2027–2028 | Mixed signals (court filing says not before Feb 2027) |
Altman described it in November 2025 as something “peaceful and calm” — the opposite of the notification-saturated experience of a smartphone. He and Jony Ive have talked about “calm computing” — a design philosophy built around reducing screen addiction rather than feeding it.
One quote stuck with me. Altman said: “You can then go for a vibe that is not like walking through Times Square and getting bumped into… but like sitting in the most beautiful cabin by a lake.”
That’s not a phone. That’s a different relationship with AI entirely.
A second device, codenamed Gumdrop, is described as a pen form factor. Details are thin. It’s likely part of the broader “family of devices” strategy rather than the primary launch.
The Smartphone Play: What Kuo’s Report Actually Says
Now the headline everyone ran with on April 27: the smartphone.
Kuo’s analysis isn’t based on insider OpenAI knowledge — it’s based on supply-chain checks. When chipmakers and manufacturers start talking, Kuo listens. And what the supply chain is saying is that OpenAI has begun preliminary work on a smartphone with AI agents replacing traditional apps.
Here’s what the report confirmed:
| Element | Detail |
|---|---|
| Chip partners | MediaTek (cost efficiency) + Qualcomm (flagship performance) |
| Manufacturing | Luxshare Precision — co-design and final assembly |
| Spec finalization | Late 2026 or Q1 2027 |
| Mass production start | 2028 |
| Core concept | AI agents replace app store model entirely |
| Context awareness | On-device + cloud model hybrid for real-time state tracking |
The market reaction was immediate. Qualcomm at +9% signals that investors believe this is serious. Apple’s -1.9% dip was relatively modest — the market doesn’t think Apple is in immediate danger, but the direction of travel registered.
The concept Kuo describes is architecturally interesting. Instead of apps, AI agents handle tasks directly. You don’t open Instagram — an AI agent with social media access shows you what matters. You don’t open Google Maps — context-aware location reasoning tells you where to go before you ask. The phone becomes a substrate for AI behavior, not a grid of icons you navigate yourself.
This is a genuine paradigm shift if it works. It’s also exactly the kind of product that has failed spectacularly before.
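To make the architectural difference concrete, here’s a minimal Python sketch of what “agents instead of apps” could look like at the OS layer. Everything in it — the `Context` fields, the capability names, the `dispatch` function — is invented for illustration; nothing here reflects OpenAI’s actual design.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    """Device state an agent can reason over (all fields illustrative)."""
    location: str
    time_of_day: str
    recent_activity: list[str] = field(default_factory=list)

# A registry mapping capabilities to agent handlers — replacing the app grid.
AGENTS: dict[str, Callable[[str, Context], str]] = {}

def agent(capability: str):
    """Register a handler for a capability (e.g. 'navigation')."""
    def register(fn):
        AGENTS[capability] = fn
        return fn
    return register

@agent("navigation")
def navigate(request: str, ctx: Context) -> str:
    # A real agent would call a model plus a maps API; this stub shows the shape.
    return f"Route from {ctx.location} for: {request}"

@agent("social")
def social_digest(request: str, ctx: Context) -> str:
    return f"Top updates relevant to {request} (activity: {ctx.recent_activity})"

def dispatch(capability: str, request: str, ctx: Context) -> str:
    """The OS routes an intent straight to an agent; the user never opens an app."""
    handler = AGENTS.get(capability)
    if handler is None:
        return "No agent registered for this capability"
    return handler(request, ctx)
```

The point of the sketch: the user expresses intent once, and routing plus context injection happen below the interface, which is exactly the layer Apple and Google currently control.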
The Risk Nobody Is Talking About: The Humane AI Pin Problem

OpenAI is walking into a hardware graveyard with confidence. That confidence might be justified. It also might be misplaced.
The Humane AI Pin launched in 2024. It was a screenless, always-on AI wearable — contextually aware, voice-first, designed to replace smartphone interactions. The reviews were brutal. Battery life was terrible. The AI made basic errors. Users returned it at high rates. The company’s assets were eventually sold off to HP.
The Rabbit R1 had a similar arc. Great demo, poor execution, rapid abandonment.
Meta’s Ray-Ban smart glasses are the one success story in AI wearables — and they succeeded because they started simple. Music, photos, basic AI queries. No attempt to replace the phone. Just augment it.
OpenAI has three advantages Humane didn’t:
- Jony Ive, whose design credibility is unmatched in consumer hardware
- ChatGPT brand recognition at near-iPhone levels of consumer awareness
- A model capable of genuine contextual reasoning, not just pattern matching
But brand and talent aren’t enough if the hardware misbehaves. The court filing that pushed the timeline to “not before February 2027” reportedly cited unresolved issues around the device’s “personality,” data privacy handling, and computing infrastructure. Those aren’t manufacturing delays. Those are fundamental product questions.
I’m cautiously optimistic. OpenAI has earned skepticism from the hardware graveyard, but they’ve earned respect from what GPT-5.5 can actually do. The gap between the two feels narrower than it did when Humane shipped.
Jony Ive’s Role — and Why It Actually Matters
In May 2025, OpenAI acquired Ive’s startup io Products for $6.4–6.5 billion. That’s a serious number for a company with no shipped products. What OpenAI bought wasn’t a prototype. It was a philosophy and the person who executed it across iPod, iPhone, and iPad.
Ive is leading design across both companies now. Tang Tan, another former senior Apple designer, serves as OpenAI’s hardware chief. The company reportedly hired dozens of former Apple engineers in 2025 specifically for device design, user interface, audio, and manufacturing.
The “calm computing” framing is Ive’s influence. Apple under Ive was obsessed with removing complexity — one button on an iPod, one surface on an iPhone. The Sweetpea philosophy follows the same logic applied to AI: if the model is intelligent enough, the interface can disappear entirely.
Whether that works in practice depends entirely on the model. The iPhone succeeded because multitouch was genuinely better than a stylus. Sweetpea will succeed only if conversational AI is genuinely better than tapping an app — in all the messy, real-world situations where you actually need it.
OpenAI vs. Apple: A Real Threat or Distant Competition?

Let’s be precise about the competitive threat, because the framing matters.
In 2026, Apple is not in immediate danger. The iPhone has 1.4 billion active devices. iOS 26 is shipping Siri improvements powered by Gemini. The app ecosystem is 17 years deep. Apple’s hardware integration — custom silicon, tight OS control, privacy architecture — is something OpenAI won’t replicate in the first generation of products.
The 2028 smartphone is a different calculation. If OpenAI ships a device that genuinely offers better AI-native functionality than an iPhone with ChatGPT installed, it creates a real choice for early adopters. Not mass replacement — a beachhead.
The more interesting competitive dynamic is the one nobody is writing about: this is a direct challenge to the app store model itself. Apple makes roughly $22 billion annually from the App Store. Google makes a similar figure from the Play Store. If AI agents can replace app discovery, purchase, and interaction — both companies lose a substantial revenue stream, not just market share.
That’s the real threat. Not “OpenAI phone vs. iPhone.” The threat is “AI agents vs. app stores as a business model.”
For everything I’ve written about how AI agents work and what they’re replacing, this is one of the clearest real-world examples of the shift from tool to autonomous executor. And for the broader context of where AI tools sit in 2026, the best AI tools guide covers the ecosystem OpenAI is trying to disrupt from the hardware layer up.
Full OpenAI Hardware Timeline: Every Confirmed Date
| Date | Event | Source |
|---|---|---|
| May 2025 | OpenAI acquires Jony Ive’s io Products for $6.4B | CNBC |
| Nov 2025 | Altman says device will be “peaceful, calm” — prototypes confirmed | CNBC / Fortune |
| Jan 2026 | Sweetpea earbuds leak — 2nm chip, eggstone design, Foxconn manufacturing | Multiple Asian publications |
| Jan 21, 2026 | OpenAI CFO confirms H2 2026 hardware announcement at Davos | Axios |
| Feb 2026 | Court filing: device “won’t ship before February 2027” | BuiltIn / court documents |
| Apr 27, 2026 | Ming-Chi Kuo reveals smartphone plans — MediaTek + Qualcomm + Luxshare | Kuo on X / TechCrunch |
| Apr 27, 2026 | Altman posts: “good time to rethink operating systems and UI design” | X / MacRumors |
| H2 2026 | First hardware announcement (not shipping) | Confirmed by OpenAI exec |
| Late 2026 / Q1 2027 | Smartphone chip specs finalized | Ming-Chi Kuo |
| Early 2027 | Earliest possible Sweetpea shipping (court filing estimate) | Court documents |
| 2028 | Smartphone mass production begins (40–50M units target) | Ming-Chi Kuo |
What OpenAI’s Hardware Means for AI Models and Agents

There’s a downstream effect here that matters for anyone tracking AI development: hardware control changes what models can do.
Right now, GPT-5.5 and Claude Opus 4.7 are powerful but context-blind. They know what you type. They don’t know where you are, what you’re looking at, whether you’ve been walking or sitting, who you spoke to an hour ago. Context-aware AI agents — the kind Kuo describes as the core of the OpenAI phone — require continuous sensor input that smartphones can provide but apps can’t always access at the depth needed.
Own the hardware, own the context. Own the context, build AI that’s actually useful across the full day — not just when you open a chat window.
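What “owning the context” means in practice is sensor fusion: continuously folding device signals into a snapshot the model can condition on. Here’s a minimal sketch under assumed names (`SensorFrame`, `build_context` — none of this is a real OpenAI or platform API); the contrast is that an app-layer assistant only sees what permissions allow, while a device vendor could feed in every frame.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One reading from a device sensor (names are illustrative)."""
    sensor: str      # e.g. "gps", "mic_ambient", "accelerometer"
    value: str
    timestamp: float

def build_context(frames: list[SensorFrame], window_s: float, now: float) -> dict:
    """Fuse recent sensor frames into one snapshot a model could condition on.

    Keeps only frames inside the time window, then takes the latest
    reading per sensor — a crude stand-in for real-time state tracking.
    """
    recent = [f for f in frames if now - f.timestamp <= window_s]
    snapshot: dict[str, str] = {}
    for f in sorted(recent, key=lambda f: f.timestamp):
        snapshot[f.sensor] = f.value   # latest reading per sensor wins
    return snapshot
```

Example: with GPS frames at t=100 (“office”) and t=200 (“home”) plus a stale microphone frame at t=50, a 120-second window at t=210 yields `{"gps": "home"}` — the stale signal drops out, the fresh one wins. An assistant living in an app never receives most of these frames in the first place.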
This is why the MCP vs. A2A protocol debate matters in this context — the infrastructure for agent-to-agent and model-to-context communication is being built right now, in parallel with the hardware that will eventually run it. And for anyone building AI workflows today, AI workflow automation tools are the current best approximation of what OpenAI is trying to bake into the OS layer.
Best Alternative: If You Can’t Wait Until 2028
Let’s be honest: most people reading this won’t wait three years for an OpenAI phone before making hardware decisions.
Here’s how the current landscape stacks up if AI-native hardware matters to you right now:
| Device / Platform | AI capability today | Best for | Limitation vs. OpenAI vision |
|---|---|---|---|
| iPhone with ChatGPT / Claude | Strong — GPT-5.5, Opus 4.7 accessible | Most users, best ecosystem | App layer — no hardware AI integration |
| Meta Ray-Ban smart glasses | Good — Meta AI, real-time vision | Hands-free context awareness | Limited compute, no autonomous agents |
| Android + Gemini 3.1 Ultra | Very strong — 2M context, multimodal | Google Workspace users | Google controls the stack, not you |
| OpenAI Sweetpea (2027+) | Unknown — prototype phase | Early adopters of AI-first computing | Not yet available, unproven category |
My honest take: if your primary interest is using the best AI available today, the device matters less than the model. The best AI chatbots in 2026 are accessible from any smartphone. The OpenAI hardware story is about where AI computing is heading — not where it is right now.
Is It Worth Waiting For?
The honest answer depends on what you want from AI hardware.
If you want always-on, truly context-aware AI that understands your environment without being prompted — yes, the OpenAI hardware vision is worth watching closely. Nothing in the current market does this at the level Sweetpea is targeting.
If you want a better smartphone experience with AI features — the 2028 phone is genuinely interesting but speculative. Three product cycles in smartphones is an eternity. A lot changes between now and 2028, and OpenAI has no track record of shipping hardware. The models are exceptional. The hardware is unproven.
The category that concerns me most is the screenless wearable segment. Humane failed. Rabbit failed. Meta Ray-Ban succeeded but played it safe. Sweetpea is attempting something more ambitious than any of them. The technology is better now. The question is whether users are ready for a device that listens and watches all day in exchange for a more seamless AI experience.
Some will be. Many won’t.
Who Should Pay Attention Right Now
Developers: The agent-first OS concept Kuo describes will create an entirely new app ecosystem — or rather, an agent ecosystem. If you’re building AI agents today, the OpenAI hardware platform could become a distribution channel. The enterprise agent deployment patterns being established now will likely map onto whatever OpenAI ships.
Business owners: The shift from app stores to agent-native platforms affects every business with a mobile app. Not in 2026. Probably not in 2028. But the direction is clear — and businesses that start thinking in terms of agent interfaces rather than app interfaces will have an advantage when the platform shift arrives.
AI enthusiasts and early adopters: The H2 2026 announcement will likely be Sweetpea or a smart speaker variant. If you want to be in the first wave, watch for that announcement. Based on the court filing timeline, shipping will be early 2027 at the earliest.
Regular consumers: Keep your iPhone. The OpenAI smartphone is years away. The category is unproven. When it ships, you’ll have two years of real-world reviews before you need to make a decision. That’s the rational play.
Frequently Asked Questions
When will the OpenAI phone release?
Mass production of OpenAI’s smartphone is scheduled to begin in 2028, according to analyst Ming-Chi Kuo. The first hardware announcement from OpenAI is expected in the second half of 2026, but that will be a screenless wearable device (codenamed Sweetpea), not a smartphone. The phone itself won’t be in consumers’ hands before 2028 at the earliest.
What is the OpenAI Sweetpea device?
Sweetpea is the codename for OpenAI’s first consumer hardware product — a screenless, AI-powered wearable described as an earbud-style device with a metal eggstone-shaped main body and behind-ear modules. It’s designed by Jony Ive’s team and, per leaks, runs on a 2nm chip with custom AI silicon. It aims to replace many smartphone interactions through always-on voice and environmental awareness.
Is OpenAI making a smartphone to replace the iPhone?
OpenAI is developing a smartphone with MediaTek and Qualcomm chips and Luxshare as the manufacturing partner, according to Ming-Chi Kuo. The concept would replace traditional apps with AI agents. However, this is not a confirmed product — it’s based on supply-chain analysis. Mass production is not expected before 2028, and full specs won’t be finalized until late 2026 or early 2027.
Who is designing the OpenAI device?
Jony Ive, the former Apple design chief behind the iPod, iPhone, and iPad, is leading design across OpenAI’s hardware efforts. OpenAI acquired his startup io Products in May 2025 for $6.4 billion. Tang Tan, another former Apple senior designer, serves as hardware chief at OpenAI.
Will the OpenAI device have a screen?
No — the first OpenAI device (Sweetpea) is explicitly screenless. OpenAI’s design philosophy, articulated by both Altman and Ive, centers on “calm computing” — removing screen-based interfaces in favor of voice and contextual AI. The smartphone rumored for 2028 would have a screen, but its UI would be driven by AI agents rather than traditional apps.
The Bigger Picture: AI Is Moving Off the Screen
Every major AI lab is racing to make models smarter. OpenAI is making a different bet: that the next competitive moat isn’t a better model. It’s a better environment for the model to operate in.
Hardware that OpenAI controls means context that no competitor can replicate. A phone that’s built from the ground up around AI agents means experiences that an app running on someone else’s OS will never match. And 40–50 million units in year one — if they can actually ship that — means an installed base that creates a platform, not just a product.
Whether it works depends on execution, timing, and whether consumers are ready to trade the familiar app grid for something genuinely different. I think some will be. The question is how many, and how fast.
The AI models are ready. The hardware is next. And 2028 is closer than it sounds.
For the latest on what these AI models can actually do before the hardware arrives, the Claude vs. GPT vs. Gemini comparison covers the 2026 model landscape in detail. And if you’re tracking how AI agents are already reshaping business workflows today — without waiting for the phone — the WhatsApp AI agents guide shows what’s deployable right now.