Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4

Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be spotted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he's probably into it.

San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window, vastly surpassing the recently released GPT-4 Turbo model from OpenAI and its 128,000-token window.
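To give a sense of scale, a 200,000-token window is roughly 150,000 words of input. Below is a minimal sketch (not from the article) of feeding a long document to Claude 2.1 through Anthropic's Python SDK; the model identifier `claude-2.1` and the Messages API are real, while the file name and prompt are illustrative assumptions.

```python
# Sketch: summarising a long document with Claude 2.1's 200K-token window.
# Assumes ANTHROPIC_API_KEY is set in the environment; the file path and
# prompt wording below are hypothetical examples, not from the article.
import anthropic

client = anthropic.Anthropic()

# Load a long document; 200K tokens fits roughly 150,000 words of text.
with open("long_report.txt") as f:
    document = f.read()

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"<document>\n{document}\n</document>\n\n"
                       "Summarize the key findings of this document.",
        }
    ],
)
print(response.content[0].text)
```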

The release comes hot on the heels of an expanded partnership with Google that gives Anthropic access to advanced processing hardware, enabling the significant expansion of Claude's context-handling capabilities.
