
In a major expansion of its platform partnerships, OpenAI has officially made its GPT OSS models available on Amazon Web Services. However you read the move, it gives off a bit of a “we’re not exclusive anymore!” vibe. Either way, this is the first time AWS customers have had native access to OpenAI-developed models.
The launch brings two open-weight models, GPT OSS 120B and GPT OSS 20B, to Amazon Bedrock and Amazon SageMaker, positioning AWS as a key player in the generative AI ecosystem alongside other cloud providers.
GPT OSS: High Performance, Wide Access
The GPT OSS 120B model features 117 billion parameters and is built for tasks that demand strong reasoning, mathematics, and code generation. The lighter 20B version is designed to be more accessible, capable of running on machines with as little as 16GB of memory, making it viable for a wider range of users and devices.
Both models are released under the Apache 2.0 license, allowing companies to modify, fine-tune, and deploy them as needed. While they are also available for download via platforms like Hugging Face, deploying them through AWS offers the added advantage of direct support and approval from OpenAI.
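For readers who want to try the downloadable weights directly, here is a minimal sketch of loading the 20B model with the Hugging Face transformers library. The repository id openai/gpt-oss-20b, the dtype and device settings, and the prompt are assumptions for illustration; check the official model card for the exact identifier and hardware guidance.

```python
# Minimal sketch: running the smaller open-weight model via Hugging Face transformers.
# The repo id "openai/gpt-oss-20b" and the generation settings are assumptions;
# consult the actual model card before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed repo id
    torch_dtype="auto",          # let transformers pick a suitable dtype
    device_map="auto",           # spread weights across available devices
)

messages = [
    {"role": "user", "content": "Explain the Apache 2.0 license in one sentence."}
]

# Chat-style input returns the conversation with the model's reply appended last.
output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```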
A Strategic Shift in the Cloud Landscape
This move marks a significant shift in OpenAI’s relationship with cloud infrastructure providers. Microsoft remains OpenAI’s primary partner, with deep integration into Azure and Windows products. However, the availability of these models on AWS suggests Sam Altman is pursuing a broader strategy of diversifying partnerships and expanding enterprise reach.
Observers note that this deployment is part of ongoing shifts in OpenAI’s commercial relationships, which may be linked to renegotiations of its long-term partnership with Microsoft. Meanwhile, AWS joins the ranks of providers offering OpenAI’s technology, adding further depth to its Bedrock and SageMaker AI platforms.
GPT OSS: Timing That Matters
The announcement comes shortly after Amazon CEO Andy Jassy fielded questions during an earnings call about AWS’s position in the fast-growing generative AI space. Analysts pointed to the rapid cloud growth of competitors like Microsoft and Google, prompting scrutiny of AWS’s pace of innovation.
Jassy emphasized AWS’s massive scale, and the new OpenAI partnership signals that AWS is actively responding to concerns by expanding its generative AI offerings to meet growing enterprise demand.
OpenAI and Open Source: A Clear Position
OpenAI’s release of these models under the Apache 2.0 license also underscores its current stance on open source, notably at a time when Meta has signaled it may scale back its own open releases. With this launch, OpenAI gives enterprises and developers powerful models and the flexibility to deploy them across different platforms.
The integration of GPT OSS models into AWS unlocks several benefits for customers:
- Enterprise-Ready Deployment: Users can build applications with integrated monitoring, security, and compliance features.
- SageMaker Customization: Fine-tune and retrain models within AWS’s machine learning ecosystem.
- Bedrock Flexibility: Leverage GPT OSS alongside models from Anthropic, Meta, Cohere, Mistral, DeepSeek, and AWS’s own offerings; a minimal Bedrock call is sketched after this list.
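As a concrete illustration of the Bedrock route, below is a minimal sketch of calling a GPT OSS model through boto3’s Converse API. The model id openai.gpt-oss-120b-1:0, the region, and the inference settings are assumptions; use whatever identifier Bedrock lists for your account.

```python
# Minimal sketch: invoking a GPT OSS model through Amazon Bedrock's Converse API.
# The model id and region below are assumptions; check the Bedrock console for
# the identifier and regions available to your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")  # assumed region

response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed model id
    messages=[
        {"role": "user", "content": [{"text": "Summarize the Apache 2.0 license."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

# The Converse API returns the assistant reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```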
For OpenAI, expanding its reach across cloud platforms strengthens its position as a leading provider of foundation models. For AWS, the collaboration adds a highly sought-after name to its AI roster and helps it meet rising customer expectations in the generative AI space.