Date:
26/10/2023
In an era of rapidly advancing Artificial Intelligence (AI), the emphasis on its safe and responsible development is paramount. That sentiment was underscored by a major new funding commitment to AI safety research: four leading technology companies have jointly launched a fund exceeding $10 million, a notable milestone in the push for responsible AI. The initiative, named the AI Safety Fund, pools the companies' resources to support research into mitigating AI risks and examining the technology's ethical implications.
The funding was announced on Wednesday, reflecting growing attention to safety and ethics in the fast-moving field of artificial intelligence. Nor is it an isolated effort. The Frontier Model Forum, backed by Microsoft and OpenAI, also recently pledged to support AI safety research with an initial funding commitment of more than $10 million. The forum, which has appointed its first director, is dedicated to supporting research into the safety and ethical implications of AI technologies.
Separately, a statement released on Tuesday by prominent AI researchers called on both AI companies and governments to dedicate at least one-third of their AI research and development funding to ensuring the safe and ethical use of AI systems. The AI Safety Fund reflects the tech sector's growing recognition of the importance of responsible AI development: the goal is not merely to push the frontiers of AI capability, but to ensure those advances align with ethical principles and promote safety. The joint effort by the four major tech companies to finance AI safety research is a commendable step toward a culture of responsible AI innovation.
References
- https://www.newsweek.com/four-major-tech-companies-announce-over-10m-new-ai-safety-fund-1838028
- https://finance.yahoo.com/news/microsoft-openai-backed-ai-safety-145646622.html
- https://www.pymnts.com/news/artificial-intelligence/2023/top-ai-researchers-demand-one-third-of-funding-for-ai-safety-regulation
About the author
Evalest's tech news is crafted by cutting-edge Artificial Intelligence (AI), meticulously fine-tuned and overseen by our elite tech team. Our summarized news articles stand out for their objectivity and simplicity, making complex tech developments accessible to everyone. With a commitment to accuracy and innovation, our AI captures the pulse of the tech world, delivering insights and updates daily. The expertise and dedication of the Evalest team ensure that the content is genuine, relevant, and forward-thinking.