Organizations are now building cutting-edge AI systems that are more powerful and complex than the large language models defining the current commercial frontier, e.g., GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2. Many of these systems may have far-reaching consequences for national security and fundamental human rights.
The EU AI Act, which was passed by the European Parliament in June 2023 but has not yet become law, categorizes AI technologies using a risk-based classification system according to the level of risk they pose. For AI systems posing unacceptable risk, the Act establishes a list of "prohibited AI practices," which include, among other things, the use of facial recognition technology in public places and AI that may influence political campaigns.
Providers must register their "foundation models" in an EU database before placing them on the market. Developers of generative AI systems are obligated to give end users access to information about any copyrighted data used to train those systems. Transparency obligations also include a requirement to disclose when content has been produced by AI.