In a blog post, Microsoft outlined a five-step plan for the public governance of AI, detailing steps such as identifying AI-generated material and putting government-led AI safety frameworks in place from the start.
Microsoft has outlined a plan for the public governance of AI and has called for the creation of a new US body to oversee the field. The company has also expressed concerns about the security and safety implications of the latest AI technology.
"We could benefit from a new agency, and this is how we will make sure that humanity maintains control of technology," Microsoft president Brad Smith said during a speech in Washington, as quoted by Bloomberg.
Smith's call came days after OpenAI CEO Sam Altman voiced support for establishing an agency that would set guidelines for the deployment of AI. Smith wants a body whose sole function would be to examine how to govern AI and AI-based tools.
OpenAI is the organization behind generative AI applications such as ChatGPT and DALL·E 2.
In his speech, Smith also raised concerns about the content that artificial intelligence can produce, saying that deepfakes, fabricated content that appears authentic, could become a significant problem.
According to Reuters, Smith said that precautions must be taken to guard against the alteration of legitimate content with the intent to mislead or defraud people.
In the blog post, Smith laid out a five-step plan to aid the public governance of AI. The plan calls for the implementation of government-led safety frameworks and a system for identifying AI-generated content.
Smith elaborated on his ideas for AI safety frameworks, stating that businesses should develop their upcoming AI tools in accordance with rules set forth by the government. He also noted that the United States National Institute of Standards and Technology, part of the Department of Commerce, has already released a new AI Risk Management Framework.
According to Smith, this framework could be used in combination with other measures to ensure that AI technologies are deployed responsibly.
Outlining the plan's further steps, Smith explained the need for effective safety brakes for AI systems that manage essential infrastructure, such as the water supply, the electrical grid, and city traffic.
These fail-safe technologies would be part of a comprehensive approach to system safety that prioritizes effective human control, resilience, and robustness. In essence, they would be comparable to the braking systems engineers have long built into other technologies such as escalators, school buses, and high-speed trains, in order to safely manage not only routine scenarios but also emergencies, Smith wrote in the blog post.
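Smith's blueprint describes these safety brakes at the policy level rather than in code, but the general pattern is easy to sketch. The Python below is a minimal, hypothetical illustration of the idea: every action an AI controller proposes passes through a brake layer that can escalate anomalous decisions to a human operator or halt the system outright. The class, method, and threshold names are assumptions made for illustration, not anything Microsoft has specified.

```python
from enum import Enum


class Mode(Enum):
    AUTONOMOUS = "autonomous"
    HUMAN_CONTROL = "human_control"
    HALTED = "halted"


class SafetyBrake:
    """Hypothetical 'safety brake' wrapper for an AI controller that
    manages critical infrastructure. All names and thresholds here are
    illustrative assumptions, not part of Microsoft's proposal."""

    def __init__(self, anomaly_threshold: float = 0.8):
        self.mode = Mode.AUTONOMOUS
        self.anomaly_threshold = anomaly_threshold

    def review(self, proposed_action: str, anomaly_score: float) -> str:
        """Gate every action the AI proposes before it reaches the grid,
        water supply, or traffic system."""
        if self.mode is not Mode.AUTONOMOUS:
            return f"BLOCKED: system in {self.mode.value} mode"
        if anomaly_score >= self.anomaly_threshold:
            # Escalate suspicious decisions to a human operator
            # instead of executing them automatically.
            self.mode = Mode.HUMAN_CONTROL
            return f"ESCALATED: '{proposed_action}' held for human review"
        return f"EXECUTED: '{proposed_action}'"

    def emergency_stop(self) -> None:
        # Hard stop that autonomous code paths cannot override,
        # analogous to a mechanical brake.
        self.mode = Mode.HALTED


# Example: a routine action passes, an anomalous one is escalated.
brake = SafetyBrake()
print(brake.review("adjust substation load by 2%", anomaly_score=0.1))
print(brake.review("shut down substation 7", anomaly_score=0.95))
```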
Among his crucial steps, Smith also emphasized the importance of developing a comprehensive legal and regulatory framework based on the technical architecture of AI.
Smith stated that regulations for AI models and for AI infrastructure providers must be developed separately, and that the law will have to assign different regulatory obligations to different actors based on their role in controlling the various components of AI technology.
According to Smith, these laws could ultimately enable customers to tell which content was produced by AI.
Smith's other steps include creating public-private collaborations to address the societal issues brought on by the new technology and making AI available for research.