AI has woven itself into our daily lives, from news feeds to corporate slang, embedding itself in nearly every interaction. This technology-driven behavioral change echoes the rise of social media, but on a scale we have not seen before.
The stark reality is that AI enables violations and manipulations on a scale we have not yet reckoned with, making even the Cambridge Analytica scandal seem small by comparison. Already, AI-powered fraud and scams are wreaking havoc across industries.
In 2023, deepfake-related fraud in fintech surged by about 700%, and government agencies have summoned banks to account for their handling of AI-enabled fraud, specifically voice-mimicry attacks. Document fraud has become easy with generative AI tools, undermining identity verification systems: for every tool that verifies a document's authenticity, there are more, and better, tools to create passable fakes. Tasks that once required effort or specialized skills, such as forging documents or crafting scams, are becoming push-button simple. Just as there are innovative copilots for writing code, "fraud copilots" are emerging, threatening social programs and exposing them to a new tragedy of the commons.
The first wave of AI fraud is creating challenges, but the next will be more subversive. AI language models can be weaponized for personalized data harvesting and manipulation, eclipsing the traditional social media misinformation we saw in years past. This kind of manipulation will be nearly impossible to detect, because it will hide in plain sight among ordinary interactions.
The threat isn't about how people use AI agents, but rather how AI agents use people. This parallels Eliezer Yudkowsky's "AI in a Box" experiment. In this thought experiment, Yudkowsky argued that a sufficiently intelligent AI, even when confined to a restricted environment, could persuade its human gatekeeper to release it. The implications are profound: If an AI can manipulate a person into releasing it, imagine what else it could manipulate people to think or do.
While the risks are evident, our approach to ensuring AI safety and trustworthiness remains inadequate. Unlike regulated industries such as healthcare or finance, AI development operates largely in a regulatory vacuum. There is no standardized, baseline transparency around how models are built, what data they use and how they address privacy.
When I provide information to a hospital, I trust that it adheres to HIPAA regulations, ensuring my data's safety and privacy. I also trust that the medical systems it connects to are HIPAA-compliant. With an AI assistant, I have no such assurance. How will my data be treated? Will it be used to target me in barely detectable ways? This concern goes beyond the data I provide. AI agents could use psychological manipulation, social engineering and other sophisticated tactics to exploit our vulnerabilities, steering people toward actions that serve the AI owner's objectives. Even small, targeted biases and misinformation can reshape opinions on policy or politics.
The Cambridge Analytica scandal demonstrated how easily personal data can be exploited at scale. With AI, targeting can happen in real time, continuously, and with a feedback loop that refines its precision.
Addressing these challenges requires holding AI development to the same standards as other high-stakes sectors: establishing an ecosystem of trusted actors and regulating ourselves as an industry rather than waiting for governments to step in. Independent bodies must evaluate AI tools across many factors. On data use, tools should be assessed for privacy protection, data leakage and adherence to data handling standards. They should also be evaluated for bias, not because any agent is free of bias, but because knowing it in advance enables informed decision-making.
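As a rough illustration of what one automated check in such an evaluation could look like, here is a minimal sketch in Python of a data-leakage probe that scans an agent's responses for PII-like strings. The pattern set, function name and sample transcripts are all assumptions made for this example; a real audit would rely on far more comprehensive detectors and labeled test corpora.

```python
import re

# Hypothetical data-leakage probe: flag agent responses containing strings
# that look like personal data (emails, US-style SSNs, phone numbers).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def leakage_findings(responses):
    """Report which PII-like patterns appear in a batch of agent responses."""
    findings = []
    for index, text in enumerate(responses):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append({"response": index, "type": label})
    return findings

# Canned outputs standing in for real agent transcripts.
sample_responses = [
    "Your appointment is confirmed for Tuesday.",
    "You can reach the patient at jane.doe@example.com or 555-867-5309.",
]
print(leakage_findings(sample_responses))
# [{'response': 1, 'type': 'email'}, {'response': 1, 'type': 'phone'}]
```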
Just as in other sectors, these bodies should certify agents under a certification and compliance framework that ensures both adherence and the reliability of the certifications themselves. Verifiable digital credentials for AI tools can enable trust, much as SSL certificates do for website security.
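To make the SSL analogy concrete, here is a minimal sketch, assuming Python and the cryptography package, of how a certification body could sign a machine-readable credential for an AI tool and how a relying party could verify it. The tool name, issuer name and credential fields are hypothetical; a production scheme would anchor issuer keys in a public registry and follow an established credential standard.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The certification body's signing key. In practice the private half would
# live in an HSM and the public half would be published in a trusted
# registry; both are assumptions for this sketch.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# A hypothetical credential recording what the tool was evaluated for.
credential = {
    "subject": "example-ai-assistant v2.1",        # hypothetical tool name
    "issuer": "Independent AI Certification Body",  # hypothetical issuer
    "claims": {
        "privacy_review_passed": True,
        "data_leakage_tested": True,
        "bias_report_published": True,
    },
    "expires": "2026-01-01",
}

# Sign a canonical serialization of the credential so any relying party can
# later confirm it was issued by this body and has not been altered.
payload = json.dumps(credential, sort_keys=True).encode("utf-8")
signature = issuer_key.sign(payload)

# A relying party (for example, a hospital procuring the tool) verifies it.
try:
    issuer_public_key.verify(signature, payload)
    print("Credential accepted:", credential["claims"])
except InvalidSignature:
    print("Credential rejected: signature does not match the issuer's key")
```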
As AI continues to integrate into our daily lives, the risks associated with its misuse are becoming increasingly apparent. We must assume that AI systems cannot be fully relied upon without a robust trust framework. To reduce potential dangers, we must be cautious with the information we share and follow sound data security practices. We must act now to build a framework of accountability and trust that keeps AI a positive force. As AI adoption accelerates, creating authoritative bodies and trust systems is necessary to confront these threats and turn potential challenges into opportunities for responsible innovation.