At a glance.
- A case against banning TikTok.
- A US national strategy for AI.
A case against banning TikTok.
While AI has taken center stage in recent weeks, world governments are still scrutinizing the safety of the video-sharing platform TikTok. The extremely popular app, owned by China-based firm ByteDance, could pose a threat to national security, many argue, especially given the copious user data its developers can access. Several democracies, including the US, have already banned the platform on government devices and are considering blanket bans that would bar any of their citizens from using TikTok. World Politics Review offers a case against such action, noting that restrictions of this kind are typically the province of authoritarian regimes. The author points out that TikTok is far from the only online platform that processes and shares massive amounts of user data; Facebook, for example, was hit with a $600 million fine for its questionable data handling during the Cambridge Analytica scandal. And while China’s history of espionage and censorship gives reason for concern, there is no actual evidence that TikTok’s operators have been colluding with the Chinese government. What’s more, even the current government-device bans will be extremely difficult to enforce, and ensuring compliance with a full ban would require extensive technological tools and regulatory measures, as well as restrictions on potential workarounds of a kind democracies have never employed. Instead of blanket bans, the author suggests, regulators should collaborate with tech companies like TikTok to implement other safeguards and independent security assessments.
A US national strategy for AI.
The White House yesterday announced a new plan focused on encouraging safer innovation in the field of artificial intelligence. A press release reads, “President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy.” The New York Times notes that the plan was released on the same day that Vice President Kamala Harris hosted a summit with representatives from AI leaders Alphabet, Anthropic, Microsoft, and OpenAI to discuss the tech sector’s responsibilities in ensuring AI products are used in a trustworthy and ethical manner. “The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Vice President Harris stated. “And every company must comply with existing laws to protect the American people.”
The plan and summit are just the most recent in a series of steps US officials are taking to better regulate the burgeoning technology. The Biden administration also announced it will host a hacking exercise focused on probing generative AI systems at this summer’s Defcon security conference, and $140 million in funding has been dedicated to the establishment of seven new National AI Research Institutes. As well, Wired adds, Democratic Senator Michael Bennet last week introduced a bill that would establish an AI task force charged with protecting citizens' rights. And four regulatory agencies, including the Federal Trade Commission and the Department of Justice, have released a joint pledge to leverage current laws to fight the abuse of AI. Democratic Senator Ron Wyden says he will once again attempt to pass the Algorithmic Accountability Act, which calls for more transparency from companies about the algorithms and automated systems supporting their platforms and products. Sarah Myers West, managing director of the AI Now Institute, warns that while the government’s focus on AI is promising, other voices in the conversation must also be heard. “We would be remiss to take an approach that leaves it to them to lead the conversation on what constitutes trustworthy and responsible innovation,” she says. “It’s for regulators and the broader public to define what responsible development of technology looks like.”
We received a number of comments on the US strategy from industry experts. Craig Burland, CISO at Inversion6, sees the advance of AI as inevitable, something to be managed and not stopped:
"There’s no putting the AI genie back in the bottle. Two years ago, if your product didn’t have AI it was considered last-generation. From SIEM to EDR, products had to have AI / ML. Now, ChatGPT is evoking fears pulled from science fiction movies. Generative AI (GAI) is an evolution of technology that started when we jumped into Big Data. GAI has tremendous potential and troubling downsides. But, the government will be hard-pressed to curtail building new models, slow expanding capabilities, or ban addressing new use cases. These models could proliferate anywhere on the globe. Clever humans will find new ways to use this tool – for good and bad. Any regulation will largely be ceremonial and practically unenforceable."
Adam Rusho, Field CTO at Clumio, sent an appreciation of the US strategy:
“As many forward-leaning companies now leverage AI-native technologies or integrations that rely on vast data lakes, the security, compliance, and integrity of this foundational data must also become an increasingly important part of the regulatory discussion, especially among businesses in highly regulated industries. Examples include flagging anomalies in patient data with large language models (LLMs) to identify potential health issues, deploying machine learning (ML) models widely in financial services to detect irregular activity or fraud, and analyzing genetic sequences in life sciences research, where massive amounts of data are processed to find minute differences and unobvious patterns. All such industries are subject to stringent data compliance requirements around retention, encryption, storage, and privacy.
"With that, there are additional initiatives organizations must take as AI technology expands and the talks of AI regulations become a reality - such as making sure backup solutions can scale to their AI and ML needs. With terabytes of new data being generated every day, an organization's data resilience platform should be able to scale to petabytes of data and track millions of events and changes in its data with high fidelity. Organizations should also test their recoverability to ensure that it meets their service-level agreements, and review their total cost of ownership. This means investigating cloud-native, cost-effective solutions for long-term retention needs and taking stock of how much overhead is going into managing copies of data (versions, replicas, archives, vaults). As much data growth as we’ve seen in the last few years, the next few years will bring orders of magnitude more. AI is just one of the technology trends intertwined with data at scale, and we must take proper measures to ensure its protection along the way.”
Ani Chaudhuri, CEO of Dasera, argues that data security will necessarily play a central role in the US strategy:
"In light of the recent announcement made by the Biden-Harris Administration, it is evident that the US government has taken some essential steps to promote responsible AI innovation while protecting Americans' rights and safety. While these actions are commendable, it is crucial to emphasize that data security plays a vital role in ensuring AI's responsible and ethical use.
"As the Administration engages with CEOs of leading AI companies, it is essential to remember that responsible and ethical AI development requires robust security measures. Data security companies play a significant part in this landscape, working diligently to protect sensitive information and mitigate risks associated with AI technologies.
"The new investments in AI research and development, public assessments of generative AI systems, and policies to ensure responsible AI use by the US government are all necessary steps to create a safer AI ecosystem. However, investing in data security infrastructure and prioritizing collaboration with data security companies is vital. In doing so, the government and AI industry can ensure comprehensive protection against risks and potential harm to individuals and society.
"Furthermore, AI developers must be held accountable for the security of their products, emphasizing their responsibility to make their technology safe before deployment or public use. This includes proper data management, secure storage, and measures to prevent unauthorized access to sensitive information.
"The Biden-Harris Administration's actions to promote responsible AI innovation are crucial for a safer future. However, it is equally important to acknowledge the role of data security companies in this landscape and foster partnerships to ensure a comprehensive and cohesive approach to AI-related risks and opportunities."
Lou Steinberg, founder and Managing Partner of CTM Insights, offered some thoughts on how artificial intelligence will affect life online. These effects will be broad. He organizes his commentary under four heads:
- AI will search. "AI will change how we search online. There is a clear need for answers to questions, not hundreds of links to read to find an answer. Tools like Microsoft's Bing/GPT and Google's Bard let you ask a question and get a complete (but not always correct) answer. This is why OpenAI's ChatGPT is estimated to have grown to 100 million users just two months after it launched."
- AI will advise. "That will lead to giving advice. Today we might ask 'what new car is the most reliable?' to help us decide what to buy. In the future, the question will be 'what car should I buy, and where can I get it at the best price?' As data is accumulated about individuals (hello, Google and Amazon), the advice will be personalized. Giving advice will affect more than car shopping sites; we will see AI-driven advice from financial planners, college counselors, even travel and restaurant reservation sites. In fact, we may not need people doing jobs that distill information and advise."
- AI will manage tasks. "Beyond providing answers, these tools can be assigned a task and provide a result. Early examples might be debugging software or writing songs, but the ability to produce new things is game-changing in many fields. Maybe I need to invent a new recipe based on what's in my cabinet. Maybe I need to write a blog, or a movie script, or software. We won't need to write down and reuse things like recipes; we can invent new customized versions every time we want something."
- And the first three activities can be significantly shaped by bias. "Giving advice and completing tasks is a strength that can lead to a weakness. People with agendas might try to bias the training data to get you to buy their car, stay in their hotel, or eat at their restaurant. Intentional bias may creep in as the next generation of 'product placement' ads. Hackers may change training data to advise that you buy a stock they are selling at an inflated price. They may teach AI to write software with built-in security issues. At the nation-state level, adversaries can look to destabilize society by advising large numbers of people to make bad choices. Protecting training data will be a new imperative for cybersecurity teams." (A sketch of one such safeguard follows this list.)
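Steinberg's last point, that protecting training data becomes a security imperative, admits a simple baseline control: a content manifest that flags any file added to or altered in a training corpus between curation and training. The sketch below is a minimal Python illustration under those assumptions; the file names and layout are hypothetical, and a real pipeline would store the manifest somewhere tamper-resistant (signed, or in a separate trust domain).

```python
import hashlib
import json
from pathlib import Path

def build_manifest(corpus: Path) -> dict[str, str]:
    """Record a SHA-256 digest for every file in the training corpus."""
    manifest = {}
    for path in sorted(corpus.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(corpus))] = digest
    return manifest

def audit_corpus(corpus: Path, manifest_file: Path) -> list[str]:
    """Return files added, removed, or modified since the manifest was written.

    A non-empty result should block the training run until a human
    reviews the change; silent drift in training data is exactly the
    poisoning vector described above.
    """
    expected = json.loads(manifest_file.read_text())
    current = build_manifest(corpus)
    flagged = {name for name in expected if current.get(name) != expected[name]}
    flagged |= {name for name in current if name not in expected}
    return sorted(flagged)

# Hypothetical usage: write the manifest once the dataset is curated,
# then audit before every training run.
#   Path("manifest.json").write_text(json.dumps(build_manifest(Path("corpus"))))
#   assert not audit_corpus(Path("corpus"), Path("manifest.json"))
```

A checksum manifest does not detect bias introduced before curation, but it raises the cost of the post-curation tampering Steinberg warns about, and it is cheap to run in any training pipeline.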
Kev Breen, Director of Cyber Threat Research at Immersive Labs, sees the big shift in AI as its move toward more general accessibility:
"While the actual risks have changed very little in the last decade, what’s new is that AI tools have become much more accessible and capable recently, catalyzing major changes to our society and cybersecurity posture. What was once a powerful technology reserved for data science teams with access to large computer data centers, has become flexible enough that powerful AI assistants can now run entirely on mobile devices.
"Privacy has been top of mind for organizations worried about the impact of these AIs in the workplace. Just recently Samsung banned all employees from using it after fears that it was ingesting sensitive corporate data. Italy also took a whole country approach blocking OpenAI from its citizens with concerns over its data handling and GDPR. OpenAI opened the door to these large powerful, capable assistants and seems to have started a global technological arms race with large US tech giants vying to be best in class, and now Alibaba and Baidu are launching their own services while Chinese regulators published draft rules for governing these powerful new tools.
"So, what is the threat? If we are being honest with ourselves, we don't yet know anything new that we didn't recognize as a threat before. Whether it's bias in the machines, poisoning of data models, data privacy and sovereignty, deep fakes, or code generation, these threats have already existed for years. Maybe it’s just that they have become more commonplace that pose the biggest threat. Is this to say that we should not do anything? No, AI is a powerful tool and, like most tools, can be used for good or for evil. Stan Lee probably said it best, “with great power comes great responsibility.“
And Dr. Vishal Sikka, CEO and founder of Vianai Systems, sees that strategy as directed, fundamentally, toward industry:
“The Biden Administration’s new actions to promote responsible AI reflect the urgent need for a transformative shift in the industry. Great responsibility and care are needed in developing and using AI. While it’s an incredibly powerful technology that can benefit our everyday lives, the risks associated with AI today cannot be overstated: even users with the best of intentions could, with a single misstep, inadvertently cause harm with AI, so responsible AI development and deployment is critical.
"We need to bring the benefits of AI to users, in ways that are human-centered and designed to amplify human capabilities - not replace them, nor endanger them. Bringing the power of human understanding together with data and AI technology can help to build intelligent systems that significantly improve business outcomes and processes, as human feedback will naturally enhance AI performance and output, and vice versa.
"Ensuring humans are centered in the development, implementation, and use of AI tools paired with a robust framework for monitoring, diagnosing, improving and validating AI models will help to mitigate the risks and dangers inherent in these types of systems. These new actions from the Biden administration are very important steps in ensuring we are building trustworthy systems that are safe, reliable and amplify our humanity, with the power of AI.”