    Posted 9/18/2023, 12:00:27 PM

    Tech Giants Back AI Licensing as Congress Debates New Regulations

    • Technology companies like Microsoft support requiring a license for AI models before they can be deployed.

    • Congress is considering bills to create a new agency to regulate AI across sectors.

    • Licensing regimes favor large companies and can limit innovation from new players.

    • Europe's heavy regulatory approach to the internet and AI has hampered innovation.

    • Existing laws could address AI concerns without new burdensome requirements.

    Source: reason.com
    Relevant topic timeline:
    7/26/2023
    Capitol Hill Stumps Anthropic on its Google Relationship
    - Capitol Hill is not known for being tech-savvy, but during a recent Senate hearing on AI regulation, legislators showed surprising knowledge and understanding of the topic.
    - Senator Richard Blumenthal asked about setting safety brakes on AutoGPT, an AI agent that can carry out complex tasks, to ensure its responsible use.
    - Senator Josh Hawley raised concerns about the working conditions of Kenyan workers involved in building safety filters for OpenAI's models.
    - The hearing featured testimony from Dario Amodei, CEO of Anthropic; Stuart Russell, a computer science professor; and Yoshua Bengio, a professor at Université de Montréal.
    - This indicates a growing awareness of, and interest in, understanding and regulating AI technology among lawmakers.
    8/18/2023
    What normal Americans — not AI companies — want for AI
    A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
    8/22/2023
    Allow patents on AI-generated inventions — for the good of science
    AI-generated inventions need to be allowed patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine; jurisdictions around the world have differing approaches to patenting AI-generated inventions, and the US falls behind in this area, highlighting the need for legislative action.
    8/23/2023
    Joe Biden Unveils Aggressive Plans For Mandatory Artificial Intelligence Regulations
    The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law; President Joe Biden recently met with industry leaders to discuss the need for AI regulation and companies pledged to develop safeguards for AI-generated content and prioritize user privacy.
    8/23/2023
    Unlocking AI’s potential for everyone
    Artificial intelligence (AI) has the potential to deliver significant productivity gains, but its current adoption may further consolidate the dominance of Big Tech companies, raising concerns among antitrust authorities.
    8/24/2023
    Regulatory uncertainty overshadows gen AI despite pace of adoption
    The deployment of generative AI (gen AI) capabilities in enterprises comes with compliance risks and potential legal liabilities, particularly related to data privacy laws and copyright infringement, prompting companies to take a cautious approach and deploy gen AI in low-risk areas. Strategies such as prioritizing lower-risk use cases, implementing data governance measures, utilizing layers of control, considering open-source software, addressing data residency requirements, seeking indemnification from vendors, and giving board-level attention to AI are being employed to mitigate risks and navigate regulatory uncertainty.
    8/27/2023
    Hitting the Books: Why AI needs regulation and how we can do it
    In his book, Tom Kemp argues for the need to regulate AI and suggests measures such as AI impact assessments, AI certifications, codes of conduct, and industry standards to protect consumers and ensure AI's positive impact on society.
    8/28/2023
    We don’t have to reinvent the wheel to regulate AI responsibly
    The increasing investment in generative AI and its disruptive impact on various industries has brought the need for regulation to the forefront, with technologists and regulators recognizing the importance of ensuring safer technological applications, but differing on the scope of regulation needed. However, it is argued that existing frameworks and standards, similar to those applied to the internet, can be adapted to regulate AI and protect consumer interests without stifling innovation.
    8/30/2023
    Britain must become a leader in AI regulation, say MPs
    The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed to guide the upcoming global AI safety summit at Bletchley Park.
    8/31/2023
    Pass AI law soon or risk falling behind, MPs warn
    The UK's plan to lead in AI regulation risks being overtaken by the EU unless a new law is introduced in November, warns the Commons Technology Committee.
    9/3/2023
    From China to Brazil, here’s how AI is regulated around the world
    Artificial intelligence regulation varies across countries, with Brazil focusing on user rights and risk assessments, China emphasizing "true and accurate" content generation, the EU categorizing AI into three risk levels, Israel promoting responsible innovation and self-regulation, Italy allocating funds for worker support, Japan adopting a wait-and-see approach, and the UAE prioritizing AI development and integration.
    9/5/2023
    Experts alone can't handle AI – social scientists explain why the public needs a seat at the table
    The rapid advancement of AI technology poses significant challenges for democratic societies, including the need for nuanced debates, public engagement, and ethical considerations in regulating AI to mitigate unintended consequences.
    9/5/2023
    Computer science experts say US should create new fed agency for AI: Survey
    A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
    9/5/2023
    U.S. Copyright Office Invites Public To Comment On AI
    The United States Copyright Office has launched a study on artificial intelligence (AI) and copyright law, seeking public input on various policy issues and exploring topics such as AI training, copyright liability, and authorship. Other U.S. government agencies, including the SEC, USPTO, and DHS, have also initiated inquiries and public forums on AI, highlighting its impact on innovation, governance, and public policy.
    9/6/2023
    AI will scramble geopolitical power if left unchecked, one executive says
    Mustafa Suleyman, CEO of Inflection AI, argues that restricting the sale of AI technologies and appointing a cabinet-level regulator are necessary steps to combat the negative effects of artificial intelligence and prevent misuse.
    9/7/2023
    Market concentration implications of foundation models: The Invisible Hand of ChatGPT
    The market for foundation models in artificial intelligence (AI) exhibits a tendency towards market concentration, which raises concerns about competition policy and potential monopolies, but also allows for better internalization of safety risks; regulators should adopt a two-pronged strategy to ensure contestability and regulation of producers to maintain competition and protect users.
    9/8/2023
    Lawmakers Caution About AI's Risks and Urge Strategic Approach as China Seeks to Lead
    Lawmakers in the Senate Energy Committee were warned about the threats and opportunities associated with the integration of artificial intelligence (AI) into the U.S. energy sector, with a particular emphasis on the risk posed by China's AI advancements and the need for education and regulation to mitigate negative impacts.
    9/8/2023
    Bipartisan Senators Propose Framework to Regulate AI Companies
    Two senators, Richard Blumenthal and Josh Hawley, have released a bipartisan framework for AI legislation that includes requiring AI companies to apply for licensing and clarifying that a tech liability shield would not protect these companies from lawsuits.
    9/8/2023
    Tech Industry Lobbyists Push to Influence State AI Laws, Seeking to Avoid Restrictive Regulations
    Tech industry lobbyists are turning their attention to state capitals in order to influence AI legislation and prevent the imposition of stricter rules across the nation, as states often act faster than Congress when it comes to tech issues; consumer advocates are concerned about the industry's dominance in shaping AI policy discussions.
    9/10/2023
    GOP Congressman Seeks to Ban Government Use of AI for Law Enforcement
    Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
    9/11/2023
    Countries Move Forward on AI Governance as ChatGPT Sparks Privacy Concerns
    Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
    9/12/2023
    Tech Giants Pledge Voluntary AI Safety Measures in White House Talks
    Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
    9/12/2023
    As AI Advances Outpace Regulation, Governments Race to Adapt
    AI has the potential to fundamentally change governments and society, with AI-powered companies and individuals usurping traditional institutions and creating a new world order, warns economist Samuel Hammond. Traditional governments may struggle to regulate AI and keep pace with its advancements, potentially leading to a loss of global power for these governments.
    9/12/2023
    China's Targeted Approach to AI Regulation Offers Lessons for U.S. Policymakers
    China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
    9/13/2023
    Tech Moguls Meet With Congress to Shape AI Regulation
    The CEOs of several influential tech companies, including Google, IBM, Microsoft, and OpenAI, will meet with federal lawmakers as the US Senate prepares to draft legislation regulating the AI industry, reflecting policymakers' growing awareness of the potential disruptions and risks associated with AI technology.
    9/13/2023
    California Senator Proposes AI Regulation Bill for Oversight and Transparency
    California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
    9/13/2023
    Elon Musk Pushes for New Government AI Safety Agency in Washington Meetings
    Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
    9/13/2023
    Lawmakers Explore AI Regulation Ideas with Tech Executives
    The nation's top tech executives, including Elon Musk, Mark Zuckerberg, and Sundar Pichai, showed support for government regulations on artificial intelligence during a closed-door meeting in the U.S. Senate, although there is little consensus on what those regulations should entail and the political path for legislation remains challenging.
    9/14/2023
    Silicon Valley-backed Oxford movement alarms UK policymakers with AI doomsday views
    The UK government is showing increased concern about the potential risks of artificial intelligence (AI) and the influence of the "Effective Altruism" (EA) movement, which warns of the existential dangers of super-intelligent AI and advocates for long-term policy planning; critics argue that the focus on future risks distracts from the real ethical challenges of AI in the present and raises concerns of regulatory capture by vested interests.
    9/15/2023
    Spain Establishes Europe's First AI Policy Task Force to Guide Responsible Technology Development
    Spain has established Europe's first artificial intelligence (AI) policy task force, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), to determine laws and provide a framework for the development and implementation of AI technology in the country. Many governments are uncertain about how to regulate AI, balancing its potential benefits with fears of abuse and misuse.
    9/15/2023
    AI Industry Should Launch Proactive Public Campaign to Shape Policy Before Risks Emerge, Following Crypto's Regulatory Struggles
    The AI industry should learn from the regulatory challenges faced by the crypto industry and take a proactive approach in building relationships with lawmakers, highlighting the benefits of AI technology, and winning public support through campaigns in key congressional districts and states.
    9/15/2023
    Federal Agencies See Promise in AI But Need Guidance on Responsible Use
    The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
    9/17/2023
    Tech Policy Debates Heat Up as AI Advances, But Consensus Remains Elusive
    Tech leaders gathered in Washington, DC, to discuss AI regulation and endorsed the need for laws governing generative AI technology, although there was little consensus on the specifics of those regulations.
    9/17/2023
    Bill Gurley Warns of Dangers of Regulatory Capture Stifling AI Innovation
    Venture capitalist Bill Gurley warns about the dangers of regulatory capture and its impact on innovation, particularly in the field of artificial intelligence, and highlights the importance of open innovation and the potential harm of closed-source models.
    9/18/2023
    Tech Giants and Nations Compete to Control AI Geography, Raising Concerns Over Centralized Power
    The geography of AI, particularly the distribution of compute power and data centers, is becoming increasingly important in global economic and geopolitical competition, raising concerns about issues such as data privacy, national security, and the dominance of tech giants like Amazon. Policy interventions and accountability for AI models are being urged to address the potential harms and issues associated with rapid technological advancements. The UK's Competition and Markets Authority has also warned about the risks of industry consolidation and the potential harm to consumers if a few firms gain market power in the AI sector.
    9/19/2023
    Global Effort to Regulate AI Advances Amid ChatGPT Concerns
    Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
    9/19/2023
    Most Americans Want Limits on AI Despite Tech Industry Push for Superintelligence
    A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
    9/19/2023
    China's New AI Rules Walk Fine Line Between Innovation and Regulation
    China's new artificial intelligence (AI) rules, which are among the strictest in the world, have been watered down and are not being strictly enforced, potentially impacting the country's technological competition with the U.S. and influencing AI policy globally; if maximally enforced, the regulations could pose challenges for Chinese AI developers to comply with, while relaxed enforcement and regulatory leniency may still allow Chinese tech firms to remain competitive.
    9/19/2023
    AI Pioneer Calls for Practical Regulations to Address AI's Real-World Risks
    While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
    9/20/2023
    UN pushes for global AI regulation, but experts divided on whether it can work
    Wikipedia founder Jimmy Wales believes that regulating artificial intelligence (AI) is not feasible and compares the idea to "magical thinking," stating that many politicians lack a strong understanding of AI and its potential. While the UN is establishing a panel to investigate global regulation of AI, some experts, including physicist Reinhard Scholl, emphasize the need for regulation to prevent the misuse of AI by bad actors, while others, like Robert Opp, suggest forming a regulatory body similar to the International Civil Aviation Organisation. However, Wales argues that regulating individual developers using freely available AI software is impractical.
    9/21/2023
    Pennsylvania Launches New Initiative to Safely Adopt AI in State Government Under Governor Shapiro
    The Pennsylvania state government is preparing to incorporate artificial intelligence into its operations, with plans to convene an AI governing board, develop training programs, and recruit AI experts, according to Democratic Gov. Josh Shapiro.
    9/21/2023
    UN Explores New International Agency to Govern AI, Reduce Risks
    The United Nations is considering the establishment of a new agency to govern artificial intelligence (AI) and promote international cooperation, as concerns grow about the risks and challenges associated with AI development, but some experts express doubts about the support and effectiveness of such a global initiative.
    9/21/2023
    Leaders Harness AI to Augment Workforces, Not Replace Jobs
    AI adoption is rapidly increasing, but it is crucial for businesses to establish responsible governance and ethical-usage policies to prevent potential harm and job loss while using AI to automate tasks, augment human work, enable change management, and make data-driven decisions, with a priority on employee training.
    9/22/2023
    Experts Sound Alarm on Unchecked AI Dangers, Call for Urgent Government Regulation
    The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.