In AI’s Wild West, Brand Safety Comes First

Talking to marketers at this point is rather like the Distracted Boyfriend meme: they’re watching AI walk down the street, as the Metaverse looks on.

AI has been every client's only focus since ChatGPT came noisily to our attention. It's ubiquitous, but new enough that everyone is still figuring out their AI strategy and how they'll adapt. While there are endless interesting potential applications for this technology, it's fair to say no one has fully processed the level of risk that comes with it. Like any new technology, AI presents a new frontier for brand safety.

In the relative absence of collective guardrails, it's down to clients and their agency partners to show restraint and self-police our use of AI to a greater extent than we have with any other technology. That self-policing starts with genuinely robust scrutiny of the tools brought in-house, coupled with due respect for the walled garden of information and data that exists in every business.


AI learns through data, and the technology comes with many ethical and moral questions about how we use other people's data, or any data at scale. We arguably don't yet know enough about AI to ask the questions that will enable us to make good, brand-safe decisions. Sure, it's tempting to generate cool imagery faster using a learning model, but without knowing the source of the data used to train that model, the stakes are high.

Just how high is evident in the lawsuits we're starting to see unfold, such as Getty Images' suit against London-based Stability AI, as well as embarrassing moments like Samsung employees leaking sensitive internal information via the generative AI platform ChatGPT over something as seemingly benign as converting a meeting recording into notes.


When even Geoffrey Hinton, widely called the godfather of AI, quits Google over his concerns about the technology, you worry we've already reached something of an Oppenheimer moment. So how do you act responsibly with AI while making the most of its potential?


With in-housing models like ours, the most potent element is the level of intimacy we can establish with clients. We're sitting right next to them, and that enables us to act as brand guardians, ensuring our clients aren't on a platform that feels unsafe or in a situation that puts them at reputational risk. This includes recommending AI-enabled tech that operates in a data-safe way: in effect, using data that belongs to the business, so that the business learns from itself.

What is incredible about AI is that a learning model can ingest everything there is to know about your brand: how it talks about itself, its visual identity right down to the fonts it uses, and how consumers typically react to it. That information is what makes a brand unique, and a learning model that knows that uniqueness better than you know it yourself can make a brand better, stronger, and more consistent.

Based on what we know performs within a business, we can use AI to write better copy. If we already know what makes an effective brief, we can create better briefs faster. We can even comp up assets for segmented advertising messages faster, all while using information from inside the business.

In my company, we've developed an AI Council, made up of our legal team, our compliance team, and creatives: anyone with a role in making sure our business or our clients' businesses are safe is across it. No one in the company is allowed to use any sort of AI technology without it going through the AI Council first, and 95% of our employees have already completed training on AI compliance, just as they would go through diversity and inclusion training.


It's important to state that the Council isn't there to stop great work and better efficiencies, but to facilitate them. Nobody wants to get in the way of advancement, especially not in the relatively laissez-faire North American market. So the Council was born, and is run, in the belief that to get AI right in its infancy, we need to show restraint and self-police how we use this technology in the short term.

AI Councils like ours will likely spring up industry-wide, albeit in a splintered way as we figure things out. In the U.S. we tend to fall back on state-level legislation: take California, which was the first state to adopt really substantive privacy rules, though it's just one state out of fifty. In the meantime, as stewards of our clients' businesses, it's up to us to make sure they don't find themselves on the wrong side of a lawsuit.

The in-house model keeps agency and client working closely together within that safe walled garden of information: if it's your own first-party information you're drawing from, you're in a safe space. That won't be the approach forever, but at least in these early Wild West days of AI, it's a productive start in navigating towards an intelligent, AI-rich future.

