A recent survey conducted by MITRE Corporation and the Harris Poll found that a majority of American adults distrust artificial intelligence (AI) tools such as ChatGPT. The results suggest that the steady stream of scandals surrounding AI-created malware and disinformation has eroded public confidence, potentially paving the way for increased calls for AI regulation.
Only 39% of the 2,063 U.S. adults surveyed believe that current AI technology is “safe and secure,” down 9 points from the previous survey conducted in November 2022. Specific concerns stand out: 82% expressed unease about deepfakes and other artificially engineered content, while 80% fear the technology could be harnessed in malicious malware attacks. Large majorities of respondents also voiced apprehension about AI’s role in identity theft, the unauthorized harvesting of personal data, the replacement of humans in the workplace, and other troubling prospects.
Concern about AI’s impact spans demographic groups. While 90% of boomers worry about the consequences of deepfakes, a significant 72% of Gen Z respondents share that unease. Despite the younger generation’s greater familiarity with and use of AI in daily life, concerns persist across several fronts, including the need for stronger public protections and the potential necessity of AI regulation.
Public doubt about AI tools has been fueled by a constant barrage of negative headlines involving generative AI tools such as ChatGPT and Bing Chat. Stories of misinformation, data-security breaches, and malware have darkened the once-promising picture of AI, prompting a palpable shift in public sentiment toward the coming AI era.
The MITRE-Harris poll also registers a clear call for government intervention: 85% of respondents support the idea of AI regulation, up 3 points from the previous survey. The same share, 85%, believe that making AI safe and secure for public use requires a joint effort spanning industry, government, and academia, while 72% call for greater federal focus and investment in AI security research and development.
Amid mounting anxiety about AI’s potential role in bolstering malware attacks, cybersecurity experts have cast doubt on AI’s current effectiveness in this area. While acknowledging that AI-powered malware is theoretically possible, experts note the technology’s limited capacity to produce effective code, with some speculating that hackers would be more likely to seek vulnerabilities in public repositories than to rely on AI assistance.
Growing public skepticism toward AI may fundamentally steer the industry’s course, compelling companies such as OpenAI to devote greater resources to safeguarding the public from their technology. With overwhelming support for regulatory measures, government regulation of AI may be on the horizon, propelled by a populace unsettled by the prospect of AI’s unchecked power.