Amazon is reportedly making significant investments in the development of a large language model (LLM) named Olympus.
According to Reuters, the tech giant is pouring millions into this venture to build a model with a staggering two trillion parameters. OpenAI's GPT-4, for comparison, is estimated to have around one trillion parameters.
This move puts Amazon in direct competition with OpenAI, Meta, Anthropic, Google, and others. The team behind Amazon's initiative is led by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy.
As Amazon's head scientist for artificial general intelligence (AGI), Prasad has unified AI efforts across the company. He brought in researchers from the Alexa AI team and Amazon's science division to collaborate on training models, aligning Amazon's resources toward this ambitious goal.
Amazon's decision to invest in developing its own models stems from the belief that having in-house LLMs could make its offerings more attractive, particularly on Amazon Web Services (AWS).