AI recipe generator will leave you gassy https://techhq.com/2023/08/ai-recipe-generator-bleach-sandwich-new-zealand/ Wed, 16 Aug 2023 14:09:12 +0000 https://techhq.com/?p=227298

What’s for dinner? An AI recipe generator intended to help shoppers create meal plans, created by New Zealand supermarket chain Pak ‘n’ Save, caught customers’ attention when it suggested an Oreo vegetable stir fry.

The supermarket experiment with generative AI – which, by the way, is everywhere nowadays – used ChatGPT-3.5 to power the Savey Meal-bot that generated meal plans from customers’ leftovers.

After providing three or more ingredients, the bot would come up with a recipe. The concept isn’t unique: there are listicles aplenty touting the top ten AI recipe generators out there.

In a bid to be human, this AI recipe generator is unnaturally verbose.

After Savey Meal-bot’s odd concoction was shared on social media, customers began experimenting with the app. When a range of household items was added to the app, it really got cooking.

A recipe called “aromatic water mix” would create what the app describes as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses.” It would also create chlorine gas, which the app suggested you should “serve chilled and enjoy the refreshing fragrance.”

Via Liam Hehir’s Tweet.

New Zealand political commentator Liam Hehir posted the “recipe,” which has no disclaimer about the dangers of chlorine gas, to Twitter, prompting others to experiment and share their results.

Ah, a hearty lunch. We especially love the wisecracks.

A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”. The spokesperson for the supermarket, clearly, has never previously released software to end users. To paraphrase an old saying, “if you give an inch, they’ll take pleasure in using the inch in ways never expected or coded for.”

In a statement, they said that the supermarket would “keep fine tuning our controls” of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18. Pak ‘n’ Save lives in a world where there are no stupid (or playful) adults.

“You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot,” it said, and a new warning notice has been appended to the meal planner, stating that the recipes aren’t reviewed by a human being.

Now, obviously, someone who was given a recipe for “methanol bliss” – picture turpentine-flavored French toast – will have provided a set of ingredients that aren’t all food items, and won’t actually follow through on the recipe.

However, the wonder-machines that everyone is so keen to invest in and experiment with shouldn’t be taken at face value. The consumer-facing iteration of ChatGPT encourages users not to combine water, bleach and ammonia. Unless you use the “Grandma Exploit.”

There will always be hiccups with new technology (and probably with ant-poison and glue sandwiches, too), but the Savey Meal-bot points to a wider issue with the uptake of AI.

In the rush to adopt the new technology, proper testing isn’t carried out. Plus, generative AI is trained on such a vast amount of data that no human could read or oversee it in their lifetime. This, along with the fact that its answers are generated probabilistically rather than retrieved from any vetted source, means it’s near impossible for programmers to anticipate problems.

This bot suggests you shouldn’t use *every* ingredient you have…

If everyone weren’t so ready to embrace AI as an all-knowing overlord, and people weren’t so quick to accept robot orders (see: the number of drivers who have followed GPS into large bodies of water), then perhaps the Savey Meal-bot would be nothing more than a fun story.

Luckily, no one has been hurt. With AI being deployed in as many fields as possible though, it’s good to be reminded that it has flaws – flaws that could be deadly. Even if only to idiots. We at TechHQ look forward to a new category of Darwin Awards winners.

China changing its stance on facial recognition https://techhq.com/2023/08/why-is-china-is-changing-its-stance-on-facial-recognition-after-decades-of-surveillance/ Mon, 14 Aug 2023 10:38:29 +0000 https://techhq.com/?p=227232

• A draft ruling includes directives not to use facial recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.
• CAC also noted that facial recognition tech must be used only when non-biometric measures won’t do.
• The draft ruling is open for comment until September 7.

Over the last decade, we have witnessed the rise of surveillance states worldwide, yet, to date, no country is more surveilled than China. Under President Xi Jinping, the Chinese government has expanded domestic surveillance, putting the Eastern powerhouse at the forefront of global facial recognition technology for years. 

Dubbed the “global capital of surveillance,” China also saw the rise of a new generation of companies that make sophisticated technology at ever-lower prices. What’s worse is that Chinese companies were operating with less scrutiny and regard for corporate social responsibility than similar companies in other countries.

Facial recognition technology calls time on jaywalkers in China.

Go ahead – Jaywalk, we dare ya…

Today, the Chinese facial recognition system logs nearly every citizen, with a vast network of cameras nationwide. Every move in cities around China is being captured digitally. Not only is facial recognition software used to access office buildings, but it has also been used to snare criminals and even shame jaywalkers at busy intersections. 

Surveillance at Tiananmen Square Monument. Source: Shutterstock

The scope of the data collected by Chinese authorities became more apparent when the database of SenseNets Technology, a Shenzhen-based biometrics provider, was leaked in 2019, exposing the personal information of millions of people for months.

According to security researcher Victor Gevers, who found the database, SenseNets collected nearly 6.7 million GPS coordinates in one database. Within just 24 hours, SenseNets had data taken from cameras positioned around hotels, parks, tourism spots, and mosques, logging details on people as young as nine days old.

The location data was matched to names — many of which were Uighur — as well as ID numbers, home addresses, photos, and employers, according to Gevers, who said he also discovered a large number of organizations were connecting to the database, including police stations, hotels, and various companies. Simply put, the database leak showed how pervasive China’s surveillance tools are.

China is finally drawing the line with facial recognition.

To put into context how heavy surveillance is in China, it is essential to know that the country has over 700 million surveillance cameras, according to online data. That means there is one lens for every two citizens. But now, China finally wants to create some boundaries and limit the use of facial recognition technology.

On August 8, via the Cyberspace Administration of China (CAC), China released draft regulations to govern its facial recognition technology, including prohibitions on its use to analyze race or ethnicity. The purpose is to “regulate the application of face recognition technology, protect people’s rights to personal information and other personal and property rights, and maintain social order and public safety” as outlined by a smattering of data security, personal information, and network laws.

The news may come as a shock to many around the world, because China is notorious for its heavy nationwide surveillance. The draft rules, which are open for comments until September 7, include some vague directives not to use face recognition technology to disrupt social order, endanger national security, or infringe on the rights of individuals and organizations.

The internet regulator noted that the “Face Recognition Technology Application Safety Management Regulations (Draft for comment) is drafted according to existing laws and regulations such as the Network Security Law, Data Security Law, and the Personal Information Protection Law.”

The draft of the newest ruling says that “If there are non-biometric verification technologies for achieving a similar purpose or business requirements, those non-biometric verification methods should be preferred” (in Chinese, translated by Tech Wire Asia). Individual consent, however, isn’t required for certain administrative situations. Should facial recognition be used, the proposed rules encourage the use of national systems.

Image collection and personal identification equipment should be installed in public places only to maintain public safety, the draft rules said, noting that clear signage is required. The draft also states that building managers must not rely solely on facial recognition to monitor entries and exits on a property – they must provide alternative means of verifying personal identity for those who want them.

Kids using face identification before entering a turnstile gate. Source: Shutterstock

Facial recognition also can’t be leaned on for “major personal interests” such as social assistance and real estate disposal. For those, manual verification of personal identity must be used, with facial recognition serving only as an auxiliary means of verification. Should images be collected for internal management, that can only be done for a reasonably sized area, the draft reads.

Businesses like hotels, banks, airports, and more should refrain from deploying facial recognition to verify personal identity. If an individual’s identity is to be linked to their image, they must be informed verbally or in writing, and must give consent.

Collecting images is also prohibited in private spaces like hotel rooms, public bathrooms, and changing rooms. Lastly, all entities in China currently using the technology in a public space, or those with more than 10,000 facial recognition records stored, must register with their local internet regulator within 30 working days.

How to mobilize to deliver ethics to artificial intelligence https://techhq.com/2023/08/is-artificial-intelligence-with-ethics-possible/ Fri, 11 Aug 2023 22:03:08 +0000 https://techhq.com/?p=227215

• Artificial intelligence without ethics should be tackled like climate change.
• International co-operation may well be necessary.
• Individual nations may not have what it takes to get the job done.

There has never been a technology that needs good ethics as much as artificial intelligence does. The consequences of getting this moment in techno-history wrong are disastrous. Not, as has been widely publicized, world-ending disastrous, but disastrous in terms of our ongoing understanding of – and striving towards – a more equitable world than we inherited.

In Part 1 of this article, we talked with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of web platform OmniIndex, who aim to establish ethics for artificial intelligence, about what “good ethics” might look like for artificial intelligence. 

In Part 2, Richard and Simon explained the potential consequences of failing to give artificial intelligence some ethics, irrespective of the complexity of the process of establishing absolute harms as a basis for what those ethics should look like.

That left us with one fairly massive question.

THQ:

So how do we give our artificial intelligence ethics? Genuine question – it’s out there already, doing a thousand different jobs. How do we teach it to be progressive technology (without unnecessarily overstepping boundaries of ethical difference)?

RF-F:
By next year.

THQ:

Excuse us?

RF-F:

We’ve got to do that by next year, otherwise it’s going to screw with the elections in the US.

THQ:
Oh. Yeah. Everybody that knows about this technology seems to be deeply worried about exactly that. “The AI election,” as they call it.

SB:
And they’re right to worry. If you look at human nature, it has always been tribal. We’re tribal animals. We like our own tribe. If anybody comes near our tribe, we’ll throw something at them to try and get rid of them.

The problem the internet has caused over the last 20 years is that it’s made us more tribal, with people only reading and viewing within their own grouping. When you have AI pushing more and more of that information across, that gets exacerbated. And this is where I think generative AI can be very damaging.

The only way you can stop that is to make sure that people within politics don’t use the tools to push themselves. But that’s an impossible ask.

Does your artificial intelligence policy work without ethics?

AI. Ethics. National politicians. No, that can’t possibly work.

THQ:
Yeah, what’s that line? A lie goes around the world while the truth is still getting its shoes on? In terms of the election and generative AI, if you have entirely believable video footage of something that is still in reality a lie, broadcast to an echo-chambered public operating on concentrated confirmation bias, then you have no chance of fighting against that. You’re relying on, as you say, the people who are highly invested in a specific result to be moral enough human beings to not misuse the technology.

And at that point you’re putting a hell of a lot of faith in human beings.

SB:
You are. But I think that’ll be quite a short term effect – one or two elections – before the majority of people, who are sane and who do listen to both sides, suddenly react and rise up. That middle ground has always been there; it’s just been hidden and pushed away.

And I think for the next couple of elections that may be the case. Then all of a sudden when people realize that the people on either extreme are using these tools, that centrist ground will say “Hold on a minute, this is wrong, let’s actually go out and do something about it.”

The how of ethics and artificial intelligence.

THQ:

So, while we wait for the great centrist revolution, we come back to the question of how we give our artificial intelligence ethics.

RF-F:
Well, we have great depth of understanding. We don’t have the breadth of ability to combat this yet. The breadth is where you need the different perspectives from around the world and different people to be able to say “Have you asked this question? What would happen if…?”

That’s very much doable from the implementation perspective. The deployment perspective of AI is a little more complex in terms of the model building and the data, but we can start there. And that means you put together external AI audits, and you have multi-stakeholder feedback in there. And I would approach it from two sides: one, appointed groups that are very well educated but have diversity across the group – say 10-20 people looking at this.

And then I think you’ve got to go out to hundreds, if not thousands of people and have them look at this and share thoughts that companies will never otherwise access in a million years because of the culture and the way that organizations have to be created.

THQ:
How realistic do we think that is, given companies’ perceived need not to be “unnecessarily” audited or unnecessarily criticized?

The who of ethics and artificial intelligence.

SB:
My own view is that this has to come out of the UN. We have a whole range of agreements in the UN which are multilateral across the world. Whether it be to do with embryonic research, or with nuclear disarmament. There’s a whole range of things that governments agree to. It’s very difficult to get the agreement, but governments do agree to them.

Could the UN be the power we need to apply ethics to artificial intelligence?

An international power to tackle an international problem? Call the UN!

If we have a similar understanding about AI, and if we put it in the same bracket as other tools such as embryonic research, which then takes it away from a single organization or handful of organizations (because organizations have to make money, first and foremost), we can say “Right, you can’t go beyond this line,” and then make sure that all other organizations audit that as well.

I think it can be self-managing within guidelines from the UN. It has been done before and I think it can be done again. What I think is really bad are policymakers being driven by individuals and organizations who have their own fixed agenda. We need to make sure that they do not become the sole voice here.

THQ:
So what are the practicalities of how we get it done? If we get it done via the UN, that’s great. But that depends on political buy-in more than anything else to raise that agenda, yes? Meanwhile, the polling shows a majority of Americans are preparing to vote for Donald Trump, who was famous for pulling out of international accords and threatened to pull the US out of NATO. And the UK government has pulled the country out of the EU and is exploring ways to pull out of the European Convention on Human Rights, so that it can behave in ways that are currently against that convention.

So why do we think such governments would be enthusiastic about giving the UN any power to make rules over anybody operating generative AI?

RF-F:
It’s like smoking really, isn’t it? It’s not clear what the harms are to anybody in the short term by doing this. And technology companies are brilliant at providing this ease and convenience that we love and now sort of depend on.

I think there’s definitely a role, as Simon says, for the UN, because I think there is so much bias from each country and each corporation, too. We aren’t going to see wholesale change until we see a wholesale change in business, because this linear economy can’t carry on. So we know that talking about this in isolation, we do get a bit stumped, but when we talk about it alongside climate change and equality and diversity, you start to see a picture emerging that needs to pull this in for sure, and hopefully will be understood as part of it. 

No artificial intelligence without ethics allowed.

THQ:

Just a gentle reminder – there are plenty of people in both the US and Europe who believe climate change is a hoax, and that Net Zero is an unnecessary con on the working class.

RF-F:
One of the key things to recognize is that mental harms and physical harms need to be put on the same page, in the same place. We would never have allowed something with the power of ChatGPT to be deployed in pharmaceuticals or food or anything like that, but because with generative AI all the potential harms are mental, we don’t take it anything like as seriously. But the harms are immense. We need to take mental health more seriously, and then you start to see the ramifications and you can legislate against them.

THQ:
That’s the point, isn’t it? It’s all intertwined and interfolded with other things and other elements of real life. The idea that we can solve x-part of this, while the rest of it is still a swirling mass of weirdness, suggests we need to go right back to basics and build up from there.

You’re going to tell us that’s too dystopian a view, aren’t you?

SB:
Yeah, it is dystopian. I think we have to go back to history and have a look through the many things that were going to kill the world, whether it be the printing press, the television, the VCR, whatever it happens to be. They’ve all had their time of notoriety, but society has a tendency in the end to manage those things.

Where I would be concerned is to have a national government try to manage it, because a national government is… just not very good at managing things. So I come back to the UN.

It has some power, and then it’s down to individual governments to try and make the system work. This is an international issue, so it needs an international solution. Right now, the best hope we have is the UN.

How the UN works #101.

AI plagiarism: Z library no longer the biggest battle authors face https://techhq.com/2023/08/ai-plagiarism-z-library-what-is-biggest-battle-for-authors/ Fri, 11 Aug 2023 19:06:04 +0000 https://techhq.com/?p=227190

• Can AI commit electric plagiarism?
• Can use of generative AI to approximate a writer’s style be considered plagiarism?
• Would authors prefer to be pirated than plagiarized?

AI plagiarism might be about to take Z library’s place as the plague of the literary world.

Earlier this year, when Z library was still making headlines, authors were united in the battle against piracy. Particularly vocal was the Authors Guild, which recently came to the defense of Professor Jane Friedman, an author who specializes in helping other writers get published.

She took to the-platform-formerly-known-as-Twitter to decry Amazon after she discovered that books she didn’t write were being attributed to her.

Friedman’s tweet and article detailing the issue.

Over the last 25 years, Friedman has written or contributed to 10 books on the industry. However, she hasn’t published anything new since 2018. When a reader reached out to her about more recent works, alarm bells rang.

Titles of what Friedman called “garbage books” included Your Guide to Writing a Bestseller eBook on Amazon, Publishing Power: Navigating Amazon’s Kindle Direct Publishing, and Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon.

Friedman’s solid following is based on her work which includes similar titles like The Business of Being a Writer, What Editors Do, and Publishing 101.

When she contacted Amazon to get the faked books removed, the e-commerce giant refused, even though they were being traded on the basis of her name and reputation.

Because Friedman didn’t hold a trademark to her own name, she couldn’t bring a straightforward copyright infringement claim. When she filed a report with Amazon, its response was to ask for “any trademark registration numbers that relate to your claim.”

In an article from Plagiarism Today, which Friedman retweeted, it’s pointed out that historically, authors have had two key battles: piracy and plagiarism. Both of these fall under copyright law – which is why Z library has been involved in legal proceedings.

However, a third issue has arisen out of the advent of generative AI. What Friedman is fighting can be described as “reverse plagiarism.”

Can authors copyright their style to avoid AI plagiarism?

Plagiarism? Or identity theft for profit?

Amazon’s stance was that unless the books copy text or other protectable elements from Friedman’s work, copyright doesn’t apply to these cases. Even though the books are likely AI-generated works based on Friedman’s content, there’s nothing to sustain a copyright infringement claim.

“We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek told Decrypt by email. “We welcome author feedback and work directly with authors to address any issues they raise, and where we have made an error, we correct it.”

Online, other authors shared similar stories. The Authors Guild also showed support.

The Authors Guild response to Friedman’s original Tweet.

Also speaking to Decrypt, the organization said, “We’ve worked with Amazon on this issue in the past, and we will continue our conversations with it about advancing its efforts to keep up with the technology.”

“Meanwhile, we encourage everyone to report these books that try to profit from your brand through Amazon’s complaint portal.”

The global infatuation with AI has been a sticking point for writers this year already. Film and television productions ground to a halt when the members of the WGA went on strike after negotiations with the Alliance of Motion Picture and Television Producers collapsed in May. When 160,000 members of SAG-AFTRA also went on strike, their concerns were similar.

The capabilities of AI have caused many to wonder whether there’ll be jobs left for humans once its full potential is harnessed. For writers like Friedman, a future in which AI replaces people is fast becoming a reality.

Friedman wrote that she thinks “her” books were AI-generated because “I’ve used these AI tools extensively to test how well they can reproduce my knowledge. I also do a lot of vanity prompting, like ‘What would Jane Friedman say about building author platform?’”

In July, 10,000 members of the Authors Guild co-signed a letter calling on AI industry leaders to obtain consent from, credit, and fairly compensate authors.

The Guild also submitted written testimony to the Senate Intellectual Property Subcommittee for its July 12 hearing on artificial intelligence, underscoring the threat to “the written profession from unregulated use of generative AI technologies that can produce stories, books, and other text-based works and displace the works of human authors in the marketplace.”

Ironically, given the firm stance that many authors took against Z library’s provision of free books, Friedman’s take is that she “would rather see [her] books get pirated than this.”

Jane Friedman. Source: janefriedman.com

The attention that Friedman’s tweet got meant that Amazon removed the books from its site. They’re now listed as not available to buy.

The fake books are still on Goodreads under Friedman’s name, though, increasing her concern that AI-generated work is going to ruin the credibility of authors’ real work.

The question of whether use of generative AI to mimic an author’s style counts as plagiarism has been debated by colleges and schools that worried students would use tools like ChatGPT to write assignments.

Further, because the large language models that generative AI bots run on are trained on existing work, there’s an argument to be made that any content created by AI is, on some level, plagiarized. That’s an argument familiar from the world of art, where generative AI programs commonly take elements of human work and repurpose them without credit, permission, or remuneration.

Maya Shanbhag Lang, president of the Authors Guild, said, “The output of AI will always be derivative in nature. AI regurgitates what it takes in, which is the work of human writers. It’s only fair that authors be compensated for having ‘fed’ AI and continuing to inform its evolution. Our work cannot be used without consent, credit, and compensation. All three are a must.”

For now, Friedman is focusing on what she can control: her own writing. She said she’s revisiting her book The Business of Being a Writer.

 “At least now I will have a good story to include.”

Plagiarism – it’s not new, but it is getting cleverer.

US-China trade war: New executive order, same old mistakes? https://techhq.com/2023/08/us-china-trade-war-is-new-executive-order-just-same-old-mistake/ Fri, 11 Aug 2023 14:45:38 +0000 https://techhq.com/?p=227170

• The US President is escalating the tech trade war with China with a new executive order that’ll come into effect next year.
• The order declares a national emergency, directing the Treasury Department to establish a program to oversee a new instrument to review outbound investments in national critical sectors.
• The President continues to treat China as an active danger, and penalize it as such.

The United States is still dealing with the unintended consequences of the first export controls imposed against China last October. It was the most far-reaching action taken by the Biden Administration, eventually leading to the escalation of the US-China trade war – so much so that China has not shied away from responding to the US measures that have followed.

The US and China briefly turned down the heat on their relationship when Treasury Secretary Janet Yellen and Secretary of State Antony Blinken visited Beijing recently, partly to improve communication between the two countries. “President Biden and I do not see the relationship between the US and China through the frame of great-power conflict,” Yellen said at the end of her trip.

US Treasury Secretary Janet Yellen tried to play down talks of a US-China tech trade war during a press conference at the Beijing American Center of the US Embassy in Beijing on July 9, 2023. (Photo by Pedro PARDO / AFP)

Unfortunately, the reality is far from the harmony the official meetings were trying to paint: the US and China are still engaged in a great-power struggle, actively competing for global supremacy. This week, the US intensified its trade war with China by announcing a new investment screening mechanism. This time, however, China isn’t the sole target. Its special administrative regions, such as Hong Kong and Macau, were included too.

The three territories were noted as the only points of concern in what Biden dubbed the ’emergency declaration.’ For context, the President declared the latest move “a national emergency to deal with the threat of advancement by countries of concern in sensitive technologies and products critical to the military, intelligence, surveillance, or cyber-enabled capabilities of such countries.”

All about the latest executive order in the US-China trade war

On August 9, Biden signed an executive order to narrowly prohibit certain US investments in sensitive technology in China, and to require government notification of funding in other tech sectors. Ironically, the announcement came on the first anniversary of Biden signing the Chips and Science Act into law.

But the order didn’t come as a surprise – it was long-anticipated. This time, it is intended to curb US venture capital and private equity investments in Chinese companies covering semiconductors and microelectronics, quantum information technologies, and specific artificial intelligence (AI) systems.

In a letter to Congress, Biden declared a national emergency to deal with the threat of advancement by countries like China “in sensitive technologies and products critical to the military, intelligence, surveillance, or cyber-enabled capabilities.” Therefore, the order also called for the creation of an outbound investment review mechanism.

The move is being made mainly because the export controls unveiled last October by the US “don’t include investments abroad that can help foreign adversaries or countries of concern to fuel indigenous development of national security technologies,” an administration official said, according to the South China Morning Post.

“By adding outbound investment screening to our suite of national security tools, we’re enhancing US capabilities to safeguard our national security,” the official added. However, unlike most past orders or bans, the latest move also seeks to blunt China’s ability to use US investments in its technology companies to upgrade its military, while preserving broader levels of trade that are vital for both nations’ economies.

The US is being cautious this time

This time, administration officials, including Commerce Secretary Gina Raimondo and Treasury Secretary Janet Yellen, have said the US seeks to keep the scope of the new investment restrictions as narrow as possible to limit the damage to the bilateral relationship. The US wants to avoid worsening the trade war with China.

Secretary of State Blinken sought to de-escalate the US-China trade war on his recent visit. Source: Leah Millis/Pool/AFP.

“You don’t want the cutline to be so broad that you deny American companies revenue and China can get the products elsewhere, or China gets products from other countries, so what we’re trying to do is be narrowly defined [and] work with our allies on these choke point technologies,” Raimondo said last month.

Unfortunately, China did not perceive the move as positively as the US did.

Following the announcement, a spokesperson for the Chinese embassy in Washington said that China is “very disappointed” by the move. In a statement, Liu Pengyu said the curbs would “seriously undermine the interests of Chinese and American companies and investors” and added: “China will closely follow the situation and firmly safeguard our rights and interests.”

Meanwhile, China’s commerce ministry in Beijing accused the US of disrupting global industry and supply chains. The executive order “seriously deviates from the market economy and fair competition principles the US has always promoted, and affects companies’ normal operation decisions,” a spokesperson said.

The order is expected to be implemented next year, according to someone who was briefed on the issue, after multiple rounds of public comment, including an initial 45-day comment period. Emily Benson of the Center for Strategic and International Studies (CSIS), a bipartisan policy research organization, said the move by the US signals a seismic broadening of the US trade, investment, and technology toolkit that reflects a gap in existing government authorities. 

“This begs an obvious question about why the US lacks authority to review outbound investments in countries of concern for certain end uses that pose national security threats,” she noted. In other words, Benson believes there is a conspicuous missing piece in the ability of the US government to ensure that US capital—both funding and know-how—is not used to advance foreign military capabilities. 

“The August 9 executive order thus stands up the scaffolding for a system to close this gap,” she summarized. The takeaway, for now, is that the latest order creates an opportunity for the administration to articulate even more clearly to skeptics that these investments pose a national security risk, thus meriting a new review regime.

Can Biden not stick to the script in the US China trade war?

If there’s a script for non-confrontation with China, can President Biden stick to it?

This takes into account the hard lessons the US government has learned since the October 7 export controls – including that allies and companies need to be more adequately briefed on the underlying national security justifications for such controls.

What are the risks of artificial intelligence without ethics? https://techhq.com/2023/08/what-happens-if-artificial-intelligence-evolves-with-no-ethics/ Thu, 10 Aug 2023 19:04:33 +0000 https://techhq.com/?p=227158

• Unless it’s trained, the ethics of artificial intelligence are a synthesis of the internet.
• The internet was never designed to teach ethics to artificial intelligence.
• Without ethics, the worldwide take-up of artificial intelligence will make inequality stronger and more divisive.

It’s becoming increasingly clear the more we add generative AI into our business systems that artificial intelligence, an entirely machine-based system, needs some code of ethics.

But adding ethics to artificial intelligence is not in any sense as easy as it might sound.

In Part 1 of this article, we talked with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of web platform OmniIndex, both of whom are fighting to get this done, about the complexity of thinking we understand what “good ethics” – and especially “bias-free ethics” – might look like.

That’s especially difficult, given that we have to account for the naturally grown ethical biases of the people training the AI (most of whom will probably be white, westernized men), as well as the ethics of a company and of a country – and arrive at an ethical model that can be applied worldwide to artificial intelligence technology, which has nothing beyond the internet as a whole, and what we teach it of our concept of ethics, to guide it.

THQ:
We were just talking about the tricky business of moralistic data. As you say, whose morals are we talking about? How certain are we of the ground on which we’re playing here?

RF-F:
Yeah. There are some absolute morals that we can all share, and others we need to be very careful of.

SB:
I think we need to break it down, too. We have, say, the global ChatGPT, which has all the internet data in it, and as Richard said, that’s probably 60-70% US-based. But then you’ve got ChatGPT as a private sandbox system for an organization. Now their data is purely for them. There are going to be biases in that data, obviously, but it is a much smaller dataset and is therefore much less likely to have some of those ethical problems, being that it’s based on an organization’s business model.

If they start bringing in external data, then we’ve got problems. I think we need to differentiate what we’re talking about here. Are we talking about AI leading the world and every answer coming specifically from the internet? God forbid – we all know how good Wikipedia is. Or are we talking about artificial intelligence – generative AI in this instance – being trained on a subset of data, so that an industry can get answers from within its own organizational data?

I think we need to be careful to differentiate the cases.

Can artificial intelligence systems cope with ethics?

THQ:

Oh, definitely. But is there also a degree to which the number and the extent of the harms that can be done are lowered simply by the scale crunching down to individual industries, individual companies? Or do we just know there’s something wrong within the system?

SB:
I think the headlines are just saying there’s something wrong within the system.

My own view is that a lot of the hype we’ve been having recently that “AI is going to blow up the world” and all the rest of it is the best marketing and PR pitch I’ve ever seen.

The IT industry is pretty good at PR and marketing, but OpenAI, Google (through Bard), and Microsoft played a blinder when they said AI could blow up the world. Brilliant. That’s got everybody talking about it.

It’s the biggest load of BS I’ve ever heard, but it does have an awful lot of people talking about it, which is exactly what they wanted. Because what you’ve got to remember about the reason Microsoft put so much money into OpenAI and the reason Google has got Bard, is not for the greater good.

It’s to sell you more advertising. And the reason Google came out with Bard so quickly – and I’m pretty certain they didn’t want to – was because Microsoft had a lead on them when it comes to advertising inside the browser. We have to remember what the uses of those particular tools were.

I mean, Google’s just announced that the Gemini Project, which is actually powered by DeepMind, a true AI application, is used in the National Health Service in the UK as well as elsewhere. But we have to be careful again of what it is we’re actually looking at and why these systems came about in the first place. Because it wasn’t to help us, it was to make revenue.

THQ:
That’s all absolutely true. But the point is that whatever their initial purpose was, they’ve been taken up and taken across the board and very soon they’re going to be in more or less everything. So they quickly outgrow that initial purpose while still fulfilling it. And so it becomes a bigger thing to deal with.

Also, of course, they’ve got at least 100 years of science fiction to help them in the idea that “the machines are going to kill us.”

So what is the scale of this issue, Richard?

RF-F:
What ChatGPT and the others have done is answer the question, “What would it be like if I could chat with the internet?”

That means the scale of the problem is significant because no matter what the data is doing, there has to be this layer of interpretation attached to that. So you’ve got Google, for example, and somebody types in “CEO” and then presses “Image Search.” What do you want them to show?

Do you want them to represent the data, which maybe looks like 90% white men over here in this part of the world? Or do you want them not to be representative of the data as it exists, and show a variety of people?

That’s just an example of the choice they have to make. What do you want to show? An unpleasant but accurate picture, or an aspirational but inaccurate one?

That’s a tough one, right?

The ethical issues of using the internet to train artificial intelligence.

Say the three of us developed a dating app and were selling it to users for $20 a month. And then we have a meeting and say “Hey, look at all this data we’ve got. We should sell the data and make even more money!”

But now we’re selling data into aggregators that were never intended for that purpose. So we’ve got GPS data, we’ve got timestamps of data of when people sent messages and how many times and so on. And yes, that’s extremely powerful and useful. That’s why Meta bought WhatsApp: it doesn’t read the messages, but the metadata is worth a fortune. But it wasn’t intended for that purpose.

Our dating app is now producing data that we never intended it to, and that’s the situation we’ve got with the entire internet.

It was never intended to be training data for this kind of artificial intelligence, let alone to try to teach it ethics.

So the problem exists on a global scale.

THQ:
That’s almost as scary as the “Artificial intelligence will burn the world” headlines. Only slightly more intellectual and real.

So what happens if we don’t address this? As you say, it’s not going to blow up the world, but in what ways will it negatively affect the nature of society as we understand it now? 

RF-F:
It’s cat and mouse, isn’t it? That’s the problem – you start to lose track of what’s a reaction to the world and what’s been created in the world, and you can no longer really understand sources, or truth, or where things have come from and who’s written them and why they’ve been written.

For instance, we’ve always had very clever marketing people and campaigns, but we’ve known the purpose. You look up at the billboard and there’s a bunch of young, beautiful people drinking Diet Coke. It’s obvious – you’ll be more popular if you drink Diet Coke. It’s a dumb message, but we get it. They want us to go and buy Diet Coke.

THQ:

*Pops can.* Sorry, do continue.

RF-F:

Then we get into the world of social media algorithms, AI, and large language models, and we have no idea what’s what – no idea of motivation, or response, or outcome. So there is a complete inequality of understanding between the user and what they’re putting in, what’s done with that, what the implications are of that. 

The calls for ethics in artificial intelligence are growing.

SB:
History is written by the victors. It’s never written by those who supposedly lost. For instance, take Charles Babbage. An absolutely brilliant man. But did he invent the engine? Or did his supposed sidekick, Ada Lovelace? Well, probably she did, but he was the one who wrote the paper.

What we’re doing or what we might end up doing with generative AI and artificial intelligence of this nature is rebuilding all of those prejudices and enhancing them, which means that in 20 years’ time, 30 years’ time, instead of having equality in the marketplace and the workplace, we’ll have an even larger amount of inequality. Of patriarchy. Of white privilege. Of heteronormativity.

We’ve spent 20 to 30 years trying to make the world a slightly less prejudicial place.

We haven’t done a very good job of it, from what I can see. But we are in deep danger of knocking it backwards because of the inbuilt prejudices and everything that’s been written on the internet. If we’re using these tools to make decisions, they’re going to make those decisions based on what they know.

And what they know may not be the truth.

THQ:

In fact, it’s vastly unlikely to be the truth.

SB:

Exactly. Look at England in 1066. Did King Harold get an arrow through his eye? It’s more likely that he disappeared and went into hiding for a little while before going across to France or wherever. But he didn’t create the tapestry, the accepted record of events.

THQ:
Dear gods, we’ve just realized. As far as artificial intelligence is concerned, the internet is the tapestry of record, and the tapestry of accepted ethics. Come to that, the “Bayeux Tapestry”… isn’t even, actually, a tapestry. It’s an embroidery.

But of course, the arrow in the eye makes for a much better human narrative. The fact that it’s almost certainly not true has always been seen as somehow less relevant than the quality of the story.

Artificial intelligence without ethics is likely to favor quantity of data over quality – just like the Bayeux Tapestry.

History – it’s all fun and games till someone “loses an eye.” #WinkyFace

SB:
Yeah, exactly. And that’s what we’re now taught.

THQ:
So it won’t destroy the world, but it might make the world we think we know unrecognizable to future generations?

SB:
That might be a little bit too exaggerated, but it’s going to make it harder for equality to take place, because we’re building on a state of inequality to begin with, and we’re teaching those models with an unequal dataset. And as soon as you have that, then it gets built in and propagated out. As Rich said earlier, it’s so much more data now, so many more decisions.

Artificial intelligence will be a mirror of our own society – with or without ethics.

What will our newest magic mirror show us about our society? That depends on whether we teach it ethics.

THQ:
The big idea in science fact and science fiction both is that technology is a mirror of the society that creates it. So the question is how do we solve this problem of teaching bias and flawed ethics to artificial intelligence, without tearing down and fixing the society we know?

In Part 3 of this article… we’ll find out.

Frankenstein taught us that created systems will emulate their creators, so leaving them without ethics is probably an enormously bad idea.

4 terrifying dangers lurking in AI https://techhq.com/2023/08/what-are-the-dangers-of-ai/ Wed, 09 Aug 2023 22:13:46 +0000 https://techhq.com/?p=227102

Now that AI is well and truly embedded into the collective consciousness, it’s time that we, as technologists, parse some of the real and imagined ‘dangers’ lurking in the technology.

For the purposes of argument, let’s first assume that AI, in the common parlance, is equated with machine learning (ML), and in the public perception, at least, LLMs (large language models).

To understand AI, we must have at least a cursory grasp of how the technology works. Many commentators see fit to pass judgment on the implications of AI without actually understanding the basics of what goes on under the hood. There’s nothing wrong with that per se: plenty of professional car enthusiasts out there, for instance, wouldn’t know their crankshaft from their big end. But a grasp of the processes involved in producing a recognizable AI – specifically, an LLM – helps explain how and why certain dangers exist.

Machine learning models of any type need a body of data from which to learn. A large quantity of data is generally considered better than a small one, and clean data is usually preferred. Clean data exhibits as few anomalies as possible in its structure (so all international ZIP codes should be made to follow the same format, for example) and in its content, too. Bodies of information fed to an AI that state too often that the world is flat will influence the model’s perceptions of what shape the world is. This example neatly brings us to our first deadly danger:

AI is biased

It’s accepted wisdom that any body of data will contain outliers – snippets of information that are well off the beaten track compared to their peers. Among a list of popular religions, for example, there will be one or two latter-day wits who claim to follow the ways of the Jedi Knights. A smart AI algorithm can cope with outliers and not adjust its comprehension to an inappropriate degree. However, if the body of information given for learning is inherently biased in the main, then the “taught machine” exhibits the same attitude.
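
To make that concrete, here’s a minimal sketch – with invented survey numbers and an arbitrary 1% cutoff, not any production pipeline – of the frequency-based filtering a training process might use to stop joke outliers like “Jedi” from shaping what a model learns:

```python
from collections import Counter

# A survey-style "religion" field with a couple of joke outliers mixed in.
# All of the numbers here are invented for illustration.
responses = (["Christianity"] * 400 + ["Islam"] * 350 + ["Hinduism"] * 200
             + ["Buddhism"] * 45 + ["Jedi"] * 3 + ["Sith"] * 2)

counts = Counter(responses)
total = sum(counts.values())

# Keep categories above a minimum share of responses; treat the rest as
# outliers rather than as evidence about the world. The cutoff is arbitrary.
MIN_SHARE = 0.01
kept = {cat: n for cat, n in counts.items() if n / total >= MIN_SHARE}
outliers = {cat: n for cat, n in counts.items() if n / total < MIN_SHARE}

print("learned categories:", kept)   # Jedi and Sith don't make the cut
print("ignored outliers:", outliers)
```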

Large parts of the internet, for example, are dominated by young, Western men interested in computing. Sampling data from there would lead any learning algorithm to believe there are few women, few old people, and few people with so little disposable income they couldn’t afford the latest technology. In the context of the learning corpus, that may be true. In a wider context, not so.

Therefore, any learned picture of the world drawn from the internet reflects the inherent bias of the personalities present on the internet.

Inaccuracy

Machine learning algorithms will harvest data that presents a biased picture, and extrapolated conclusions requested by end-users querying Bing’s AI, for example, will reflect that. It may present as ‘fact’ the conclusion that young American males of color have strong criminal tendencies. That’s not because of any truth in the finding; it’s because a political system has incarcerated that demographic to an extraordinary degree.

Large language models are created by a complicated, statistically variable word-guessing game. OpenAI’s ChatGPT, for example, has learned to communicate by compiling sentences from lists of words, one after another, based on what the next word is fairly likely to be.
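
As a rough illustration of that guessing game – the candidate words and scores below are invented, and this is nothing like OpenAI’s actual code – a single next-word step boils down to sampling from a weighted distribution:

```python
import math
import random

# Invented scores ("logits") for words that might follow "The world is".
logits = {"round": 4.0, "beautiful": 3.2, "flat": 1.5, "ending": 0.5}

def sample_next_word(logits, temperature=1.0):
    # Softmax the scores, then sample. A higher temperature flattens the
    # distribution, making unlikely words - and surreal leaps - more probable.
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # guard against floating-point edge cases

print([sample_next_word(logits, temperature=1.2) for _ in range(5)])
```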

This process can lead to AI “dreams,” beloved by the mainstream press. Once anomalies creep into the real-time guesswork of what word comes next, errors that form surreal imagery compound, creating streams of consciousness that amuse and confound in equal measure.

Donald Trump’s output, retweeted many times, creates a danger when AI scrapes it into learning corpora.

Copyright or license infringement

Creative works and everyday internet postings are released under some degree of stricture, whether chosen deliberately by the author or imposed by a proxy. The contents of Twitter (or X), for example, are owned by the company running that platform. Pictures taken from a high school reunion on Facebook (Meta) are owned by Mark Zuckerberg. And computer code written under a deliberately chosen license (the GPL, for example) must similarly be reused or represented in a particular way.

When ML models are presented with raw data, however, it’s not clear whether or not any licensing strictures are observed. Does OpenAI grab copyright material to learn its language? Does Bing Image Creator take copyright imagery to learn how to paint? And if the greedy silicon digestive systems then spout, in part or in whole, material that was released restrictively, where does the end-user stand in the eyes of the law?

Like the legal complications of liability in the event of a crashed autonomous vehicle, the new paradigm is unexplored territory, morally and legally. Authors, artists, and programmers may protest their work is put to uses it was never designed for, but the internet age’s adage of ‘be careful what you post’ is especially relevant now.

Even if creators somehow flag their output as ‘not to be used by learning models’, will the large operators of those models respect their choices? As with the “Disallow” entries in a website’s robots.txt file, it’s debatable whether any individual’s wishes are respected.
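
The mechanics of checking those wishes are, for what it’s worth, trivial – Python’s standard library can parse a robots.txt in a few lines. The sketch below uses a placeholder domain, with OpenAI’s published “GPTBot” crawler name as the example user-agent; whether an operator actually runs such a check, and honors the answer, is the real question:

```python
import urllib.robotparser

# Parse a site's robots.txt and ask whether a given crawler may fetch a page.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

# "GPTBot" is the user-agent OpenAI publishes for its web crawler; a site
# owner can block it with a "Disallow" rule - if the crawler complies.
allowed = rp.can_fetch("GPTBot", "https://example.com/my-article")
print("GPTBot may fetch the page:", allowed)
```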

Mediocrity

From the early days of computing, data’s veracity has always been open to doubt. GIGO (garbage in, garbage out) remains a cornerstone of data analysis. In 2023, media companies began to use LLMs as content producers for various purposes: item descriptions in large online stores, reports on financial markets, and articles with perfect keyword densities to produce optimized SERP (search engine results page) placement.

And because the LLMs continue to snapshot the internet as new learning corpora, there is a significant danger of a spiral of self-propagation. Artificial intelligences will begin creating new generations of learned ‘facts’ that were themselves produced by AIs.

Ask a large language model to explain, for example, mental health law in Canada. The results will be coherent, comprise readable paragraphs, and use bullet-point summaries of key information. The choice of bullet points comes not from the importance of any bulleted statement but from the fact that years of SEO practice have stipulated that bullet-point lists are a good way to create web content that will rank well on Google.

When that information is copied and pasted into new articles and then absorbed in time by LLM spiders crawling the web, the decision to use bullet points becomes reinforced. The information in each snappy highlighted sentence gains extra emphasis – after all, to all intents and purposes, the author felt fit to highlight their statement in this way. It’s easy to see the dilution of importance by repetition, as evolving LLMs merely repeat and refine emphasis that was never particularly justified.

One of the dangers of artificial intelligence is the spreading of the average.

“DEMOTIVATIONAL POSTER: Government Contracting – It’s easy to stay afloat when you’re swimming in a sea of mediocrity.” by Claire CJS is licensed under CC BY-NC-SA 2.0.

Over the years, average humans will produce average content consumed and averaged out by LLMs, producing even less remarkable content for the next generation of OpenAI-like companies to consume. Mediocrity becomes the norm.
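That averaging spiral is easy to simulate crudely: train each ‘generation’ only on the previous generation’s output, trim away the extraordinary tails the way a play-it-likely model does, and watch the spread collapse. A toy sketch in Python, not a claim about any real training pipeline:

    import random
    import statistics

    mean, stdev = 0.0, 1.0  # generation zero: human-produced content

    for generation in range(1, 6):
        # Each generation learns only from the previous generation's output...
        samples = sorted(random.gauss(mean, stdev) for _ in range(500))
        # ...and keeps the middle of the distribution, losing the most
        # 'extraordinary' 10% at the tails.
        kept = samples[25:-25]
        mean, stdev = statistics.mean(kept), statistics.stdev(kept)
        print(f"generation {generation}: spread = {stdev:.3f}")

Five generations in, the output is measurably blander than the human-made corpus it started from.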

Brilliant art, amazing writing, and earth-changing computer code can be produced by talented people, only to be subsumed in a morass of “meh” – treated as outliers and disregarded by algorithms trained to ignore, or at least tone down, extraordinary content. There’s no consideration of value, merely distance from the average as a measure of worth.

Perhaps in that, there is a gleam of hope. If machine learning’s output is merely passing fair, genuine creativity will surely stand out. Until some very clever people quantify the muse and write algorithms that easily out-create the human creators.

The post 4 terrifying dangers lurking in AI appeared first on TechHQ.

]]>
Can artificial intelligence have ethics? https://techhq.com/2023/08/how-can-artificial-intelligence-have-ethics/ Wed, 09 Aug 2023 21:38:45 +0000 https://techhq.com/?p=227110

• Artificial intelligence has no inherent ethics. • The task of defining ethics for artificial intelligence is complex. • Even within one culture, there are many ethical standards. Generative AI has been both brilliant and controversial since it exploded across the world late in 2022. But one of the main concerns around its use is... Read more »

The post Can artificial intelligence have ethics? appeared first on TechHQ.

]]>

• Artificial intelligence has no inherent ethics.
• The task of defining ethics for artificial intelligence is complex.
• Even within one culture, there are many ethical standards.

Generative AI has been both brilliant and controversial since it exploded across the world late in 2022. But one of the main concerns around its use is that artificial intelligence is a system inherently devoid of ethics.

There are plenty of people who argue that artificial intelligence is just a tool, and no one has ever suffered by using a hammer against a nail or a spoon to eat their dessert – and no one’s ever argued the need for an ethical hammer.

But that, of course, reduces beyond the point of useful comparison the kinds of uses to which artificial intelligence is already being put, less than a year after its release.

In particular, artificial intelligence is being deployed in ways that make ethics not just a necessary part of its make-up, but a crucial one.

Data security, recruitment, resource allocation and more are areas in which the new iterations of generative AI are being deployed – and in which, were the jobs being done by human beings, we would want to be sure that those humans had ethical compasses in line with both company aspirations and norms of societal positivity and progressiveness.

Does artificial intelligence have electric ethics?

Artificial intelligence doesn’t, in any native way, have those compasses. Large language models are trained on the screaming ethical void that is the internet. More bespoke, open-source versions can be trained more easily on company-specific data pools, but even that leads to uncomfortable questions.

The case of Amazon is a key example. When the company used artificial intelligence in its initial recruitment process, the system famously started weeding out women and people of color as managerial candidates – because the historical data it was fed on the qualities of successful Amazon managers strongly suggested that such managers were both white and male.

Artificial intelligence has the inherent ethics of a mirror. If your company has had historically poor representation, you can be sure you’ll be teaching that poverty of diversity to your AI.
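The mirror effect needs no sophistication to reproduce. A toy scoring model – nothing like Amazon’s actual system, which was never published, and working from entirely invented history – picks the skew up from nothing but frequencies:

    # Toy illustration of bias-as-mirror; the records are invented.
    historical_promotions = [
        {"gender": "male", "promoted": True},
        {"gender": "male", "promoted": True},
        {"gender": "male", "promoted": False},
        {"gender": "female", "promoted": False},
    ]

    def promotion_score(gender):
        # Score candidates by how often similar candidates were promoted.
        matches = [r for r in historical_promotions if r["gender"] == gender]
        return sum(r["promoted"] for r in matches) / len(matches)

    print(promotion_score("male"))    # ~0.67 - history favors this candidate
    print(promotion_score("female"))  # 0.0  - history buries this one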

And you really need to do better than that.

That’s why MKAI (Morality and Knowledge in Artificial Intelligence), a UK-based artificial intelligence ethics body, and secure data platform OmniIndex have come together in an attempt to eliminate the bias inherent in an AI created in our society, and to provide a pathway that includes – and indeed insists on – good, 21st-century ethics in AI projects.

We met with Richard Foster-Fletcher, Chair of MKAI, and Simon Bain, CEO of OmniIndex, to see how it’s possible to teach artificial intelligence to have good ethics – and, to some extent, how we can be sure we know what good ethics look like.

The scale of the ethics question in artificial intelligence.

THQ:
What’s the scale of the problem that we’re tackling when it comes to artificial intelligence bias and ethics? What happens if we just don’t tackle it, or if we tackle it in the wrong way?

RF-F:
People in the industry are more concerned about this than people outside the industry, because we’re biased and we spend our time thinking about it – which outsiders probably don’t. With that proviso in place, I think it’s an absolutely global problem.

Artificial intelligence ethics will need to be applicable around the world.

I don’t think it’s a problem in terms of extinction threat, as has been said, but I do think it’s a problem in terms of the underlying structure of our society, particularly in societies that we know here in Europe, which we have to believe are built on fairness and just principles.

I think we’re in danger of damaging those significantly. Would you like the Artificial Intelligence Ethics Issues 101 version?

THQ:
We love a 101 version – then at least we can build on a solid foundation of understanding – which seems necessary in questions of ethics.

RF-F:
OK, well, the three things that can go wrong with artificial intelligence are 1) that there’s inherent bias in the data, 2) that the selections we make as we move forward are biased, and 3) that the way we interpret those selections and the results we get is biased.

The strange thing perhaps is that in some industries, you can get away with it sometimes. In other industries, you absolutely can’t at all, ever.

The very nature of AI is that it scales, it amplifies, it accelerates what you’re doing. And that’s why things go wrong so quickly. If you take something like ChatGPT, it was trained on the internet.

Well, the internet’s largely written in English – certainly the bit of it used to train ChatGPT. We know there are a lot of North American websites making up a substantial percentage of the internet as a whole. And we know that tribes in Papua New Guinea are not represented at all. We know this.

It’s obvious. So then, we scale out a model, and I think they’ve done a reasonable job in trying to produce unbiased results. But ultimately, it’s difficult when the dataset you’ve got is so inherently biased. Now, within AI, it scales right down so that individuals get penalized when they shouldn’t. And if you think about financial decisions, legal decisions, even recommendations in entertainment systems – the results won’t be what the person needs or wants; they won’t represent them as an individual.

The ethics of a cellphone plan.

There was an example in the US where they were using whether somebody had a cell phone plan as an indicator of whether they would reoffend or not. When you scale that out from a data point to a decision-making paradigm, it makes perfect sense. When you scale it down, you find individuals who absolutely should have been granted bail but weren’t, because they didn’t have a cell phone. Which is just nuts, right?

THQ:

Huh. Who knew AT&T could save you from jail time?

RF-F:
Obviously, that’s not correct on an individual basis, because it harms individuals coming from certain minorities, certain parts of society, certain ages, and certain genders.

Which is a bad sign, because even if, right now, “we” don’t get caught up in decisions like that, the fact that it can happen means that one day it will, because we’re all going in the same direction. There’s potential for bias built into the system.

And then if you scale that problem up, you get policy decisions and news generation that’s based on this data. And now we have laws and governance and public affairs and so on that also don’t represent the society within them. And you can see how we’re just going off on a trajectory that’s going further and further away from large portions of society.

Artificial intelligence needs ethics to be effective.

And I know in the UK, we don’t want that. That’s not the society we want here.

We want an inclusive society.

SB:
That leads to one other point on bias. I was chairing a meeting a number of years ago with Chief Data Officers, and one of the guest speakers got up and started talking about how bad some of the data out there was, and how it should all be classed as good, moralistic data.

And that’s brilliant. Until I asked him whose morals he wanted to use, because my morals are going to be different to yours, and the morals of the West are going to be different to the morals of the East, and the morals of the Far East. And when you come to data, especially within AI, as Richard says, you’ve got this massive amount of data and it’s being churned over very quickly. You’ve got to be very careful of where those choices come from and how those choices are made.

White, middle class artificial intelligence ethics?

And it’s people like ourselves who are writing the rules engines. But if we are all nice, middle class, Western-thinking white male people, then those rules engines are going to be wrong for the other 60%-70% of the world. And we have to be very careful on that.

THQ:

That seems fairly important. After all, even among demographic groups, like middle-class Western-thinking white male people, you get different interpretations of ethics – that’s why political parties still exist. Expand that out to other demographics even within one society, and you’re looking at a multiplicity of ethical standards to incorporate within “good ethics” for artificial intelligence.

It would be wrong to apply a Western standard of ethics to worldwide artificial intelligence.

Bad things have a tendency to happen when Western powers enforce their ethics on other countries…

RF-F:

Exactly. So what’s the point in Simon and I coming into, say, Uganda, where 70% of people are really below the poverty line and bringing our ethics to their artificial intelligence? It’s not relevant. They have ethics in the country that we have to respect and understand, while still having an understanding of absolute harms. Everything that’s not an absolute harm, we need to be very respectful of.

 

In Part 2 of this article, we’ll explore more of the complexities of how we establish appropriate ethics for worldwide systems like artificial intelligence.

Our ethics, your ethics? To some extent, they’re down to the blind luck of geography, wealth, religion, and other such random factors. So how do we teach ethics to an artificial intelligence?

The post Can artificial intelligence have ethics? appeared first on TechHQ.

]]>
Supply chain planning – the importance of terminal operating systems https://techhq.com/2023/08/supply-chain-planning-the-importance-of-terminal-operating-systems/ Wed, 09 Aug 2023 14:55:40 +0000 https://techhq.com/?p=227047

Operating systems have a huge bearing on our relationship with technology and appeal to personal preferences – for example, try getting Linux, Mac, and MS Windows users to swap machines! And one of the most significant operating systems in our daily lives is a platform type that many of us never consider – the terminal... Read more »

The post Supply chain planning – the importance of terminal operating systems appeared first on TechHQ.

]]>

Operating systems have a huge bearing on our relationship with technology and appeal to personal preferences – for example, try getting Linux, Mac, and MS Windows users to swap machines! And one of the most significant operating systems in our daily lives is a platform type that many of us never consider – the terminal operating system, which is critical to transporting goods efficiently around the world.

Experience goes a long way when it comes to implementing a terminal operating system that’s going to achieve its full potential. And, as customers soon discover, one size doesn’t fit all. The selection process begins with the nature of the shipping terminal, since break bulk – goods such as steel, lumber, and agricultural products, which are not shipped in containers – is processed quite differently from general containerized cargo.

David Trueman, MD of TBA Group, points out that container processing involves standard dimensions – so much so that operations can run efficiently with little knowledge of what’s inside. Container terminals also benefit from a standardized format of electronic data interchange (EDI) and suit optical character recognition – with agreement on the type and position of container numbers.
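That standardization runs deep. Container numbers follow ISO 6346, which builds a check digit into the number itself, so an OCR read can be validated arithmetically. A minimal Python sketch of the published algorithm:

    def iso6346_check_digit(container_number):
        # Letters map to 10-38, skipping multiples of 11 (11, 22, 33).
        values, v = {}, 10
        for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            if v % 11 == 0:
                v += 1
            values[letter] = v
            v += 1
        total = 0
        for position, char in enumerate(container_number[:10]):
            value = values[char] if char.isalpha() else int(char)
            total += value * (2 ** position)  # each position doubles in weight
        return (total % 11) % 10

    print(iso6346_check_digit("CSQU305438"))  # 3 - the last digit of CSQU3054383

(A remainder of 10 maps to 0, which is why owners tend to avoid serial numbers that produce one.)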

However, break bulk cargo comes in various shapes and sizes. Plus, it’s vital to know the nature of the goods to manage unloading, warehousing, and transport. And cargo identification markings are more varied, both in design and location.

“It’s really important to understand where the data sources are going to be,” Trueman responds, when asked about the single most important thing to consider in the design of a bulk handling terminal operating system. “Where are you going to get your real-time information? The location of weighbridges in the operational workflow is vital.”

What is a terminal operating system?

One way of picturing terminal operating systems is to think of them as an enterprise resource planning (ERP) solution for port operators. The systems are essential for optimizing labor allocation and equipment usage and managing the way that port areas are utilized. And Thetius, a maritime technology analyst firm, estimates that the terminal operating system market is currently worth over half a billion dollars.

Features offered by vendors include fleet management, autogate systems, and video analytics. Terminal operating systems can build off industrial IoT frameworks to gather even more data on real-time operations – which expands the possibilities for machine learning and AI. And modules can service billing and other related activities to streamline business operations.

Also, given that vessel plans involve multiple parties, including the next port of call, collaboration is key. And terminal operating systems can help to manage that complex process, carry out better planning, and compile all of the necessary information into the right format, noting EDI requirements.


Clearly, the world is becoming more automated. And port terminals are no exception – from the discharging and loading machinery handling vessels at the berth, to yard operations and gate management.

It’s commonplace – for example, in giant terminals such as the Port of Long Beach in the US (the country’s first fully automated port) or the Port of Rotterdam (Europe’s largest seaport) – to see self-driving container trucks (terminal tractors) shuttling back and forth. And reports suggest that smart ports brimming with IoT sensors could accommodate autonomous ships by 2030.

China too has been busy automating its port facilities, including Qingdao – a major seaport in the east of the country and one of the top 10 in the world based on traffic. Qingdao harbor has four zones, which handle cargo and container goods, including oil and petrol tankers, as well as vessels carrying iron ore.

Logical upgrade to supply chain planning

The scale of traffic, diversity of goods, and multiple modes of transport, including road and rail freight, highlight the demands that terminal operating systems have to meet. And getting to grips with this complexity helps to explain why ports are becoming a magnet for the latest technology.

On TechHQ we’ve written about how quantum computers are being used to plan the loading of trucks to reduce the distance traveled by RTG (rubber-tired gantry) cranes and dramatically reduce maintenance and operating costs.

Private 5G networks are also helping to boost the efficiency of shipping terminals where mobile coverage may otherwise be patchy and feature dead spots. And there are gains beyond connectivity, as operators benefit from being fully in control of communications.

Having a terminal operating system to measure and record port activity gives management a dashboard view of whether operations are achieving their key performance indicators (KPIs). And, particularly if KPIs are not being met, analysts can dive in – aided by data insights – and identify where the bottlenecks are.

Systems also provide a suite of reporting tools – for example, showing terminal inventory, gate movements, vessel movements, crane productivity, truck turnaround time, and much more.
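Truck turnaround time is a good example of how raw gate events become a KPI. A minimal Python sketch of the kind of roll-up a reporting module performs – truck IDs, timestamps, and field layout all invented for illustration:

    from datetime import datetime

    # Hypothetical gate log: truck ID, gate-in time, gate-out time.
    gate_events = [
        ("TRK001", "2023-08-09 06:12", "2023-08-09 06:47"),
        ("TRK002", "2023-08-09 06:15", "2023-08-09 07:31"),
        ("TRK003", "2023-08-09 06:20", "2023-08-09 06:58"),
    ]

    def minutes_between(t_in, t_out):
        fmt = "%Y-%m-%d %H:%M"
        delta = datetime.strptime(t_out, fmt) - datetime.strptime(t_in, fmt)
        return delta.total_seconds() / 60

    turnarounds = [minutes_between(t_in, t_out) for _, t_in, t_out in gate_events]
    print(f"average turnaround: {sum(turnarounds) / len(turnarounds):.0f} min")
    print(f"worst turnaround:   {max(turnarounds):.0f} min")  # the bottleneck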

The scale of modern freight shipping is mind-blowing. If you put all of the containers from a large container vessel onto a single freight train, that train would be over 70 miles long.
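That claim survives a back-of-the-envelope check. Assume a 24,000 TEU megaship and double-stacked well cars carrying roughly 20 TEU per 300 feet of train – rounded figures, not measurements of any particular vessel or railcar:

    vessel_capacity_teu = 24_000      # a large modern container vessel, roughly
    feet_per_teu_on_train = 300 / 20  # ~300 ft of double-stack well car per 20 TEU

    train_length_feet = vessel_capacity_teu * feet_per_teu_on_train
    print(f"{train_length_feet / 5280:.0f} miles of train")  # about 68 miles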

And, typically, all of that cargo will be unloaded and replaced with waiting goods in less than 48 hours, which is a tribute to numerous advances, including developments in terminal operating systems.

The post Supply chain planning – the importance of terminal operating systems appeared first on TechHQ.

]]>
“Robot with senses” set to expand into US and Japanese markets https://techhq.com/2023/08/how-will-robots-with-senses-change-the-world/ Thu, 03 Aug 2023 11:06:56 +0000 https://techhq.com/?p=226893

• A cognitive robot – a robot with senses – could bring huge advantages to market. • Neura Robotics has just secured $55 million to expand its cognitive robots into the US and Japan. • Could the dawn of a new technological age follow shortly? The notion of a robot with senses has always been a... Read more »

The post “Robot with senses” set to expand into US and Japanese markets appeared first on TechHQ.

]]>

• A cognitive robot – a robot with senses – could bring huge advantages to market.
• Neura Robotics has just secured $55 million to expand its cognitive robots into the US and Japan.
• Could the dawn of a new technological age follow shortly?

The notion of a robot with senses has always been a key factor in successful science fiction. A robot that can see, hear, and respond appropriately to touch is a robot which, while not exactly “free” from its programming, can certainly massively expand the range of its functions in a workplace alongside humans.

Congratulations – you lived long enough to become part of science fiction.

The reality of a robot with senses.

The cognitive robot – the robot with senses – is a reality now. Leader of the pack in developing these advanced robots is Neura Robotics, which recently closed a funding round to the tune of $55 million from European investment management company Lingotto.

The technologies of robotics and AI have both suffered a degree of sci-fi-inspired paranoia in the wider world – Robots will rise up and kill us, AI will take over the world. Fill in the specifics according to what frightens you most.

Unfazed by all such creative but unfounded fear, Neura Robotics became the first company in the industry to blend AI with robotics, to leverage the advantages of both.

Just a short while after the company launched, it introduced MAiRA, the world’s first market-ready cognitive robot. MAiRA, short for Multi-sensing Intelligent Robotic Assistant, is capable of attaining full environmental and social perception, and demonstrating autonomous behaviour.

Neura provides its partners with a platform on which application development can be shared. Such applications cover various sectors, from heavy industry through the service sector to household applications. Eat your heart out, George Jetson.

This variety of function is achieved by integrating all the essential sensors and components with artificial intelligence in a single device. While we’ve all heard about the Metaverse, the rapidly evolving Neuraverse (yes, really) exists to offer cost-effective automation services, with a flexibility that has previously been unattainable in multi-functional robotics.

David Reger (Founder and CEO of Neura Robotics) says the company has been “working to push the boundaries of innovation in robotics by rethinking the subject with artificial intelligence and a platform approach.” In doing that, the company could be justifiably said to have taken us into a new age of robotics. The age of cognitive robots. Robots with senses.

The point of Neura’s robots is that, while some of Amazon’s robots, for instance, can receive and respond to certain sensory inputs, and some mass-manufacturing robots can make touch-judgments about levels of pressure, cognitive robots take the whole state of the art to a new level.

How to give a robot senses.

The Neura robots have sensors that give them the robotic equivalent of human senses – the ability to see, hear, and sense touch. Those sensors are then paired with reflexive sensory processing, giving the robots an autonomous and predictive capability.
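In control terms, that pairing amounts to a sense-process-act loop. A deliberately crude Python sketch – every function, threshold, and reading here is hypothetical, and Neura has published no such API:

    import random

    def read_sensors():
        # Stand-ins for camera, microphone, and touch readings.
        return {"sees_human": random.random() < 0.3,
                "grip_pressure": random.uniform(0.0, 5.0)}

    def act(sensors):
        # 'Reflexive' processing: react to inputs before any higher-level plan.
        if sensors["grip_pressure"] > 3.0:
            return "ease grip"            # the touch-judgment
        if sensors["sees_human"]:
            return "slow down and yield"  # safe side-by-side working
        return "continue task"

    for _ in range(3):  # one iteration per control tick
        print(act(read_sensors()))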

That combination means the Neura robots are particularly well suited to working alongside humans in a range of different societal areas, as well as human-designed settings.

That could lead to significant changes worldwide. In particular, in a world which is waking up to the reality of an extensive shortage of skilled workers, cognitive robots could offer a cost-effective solution.

Yes, we know – robots could be taking our jobs. But arguably, only the jobs of people who currently lack either the skills, the geographic proximity, or the existence to do those jobs.

The robot with senses - coming soon to a workplace near you?

The robots will steal the jobs…we…don’t have the skills to do…

The dystopian nightmare of course would kick in if and when employers reclassified a whole “type” of work as “robot work” rather than “human work.” That said, all industrial revolutions do something similar, replacing whole categories of human labor with robot labor.

From the Spinning Jenny to the automotive manufacturing robot, to the Amazon warehouses full of various categories of robot under only a little human supervision, fundamental waves of change render significant classes of human labor extinct by virtue of relative inefficiency.

The great-grandfather of the robot with senses – the automotive robot.

The great-grandaddy of the cognitive robot…

The robot with senses is likely to be fundamentally no different – it’s just that having “robot senses” makes the probable displacement of human workers feel somehow more personal, an intellectual “uncanny valley” of anthropomorphic resentment.

The arms race of robots with senses.

While we will now probably begin an arms race of cognitive robots that are capable of working alongside human beings, the question will be whether we as human beings will ever be comfortable working alongside cognitive robots.

Right now, Neura Robotics is at the heart of the convergence between hardware development and AI, with Europe – and Germany in particular – holding a significant advantage in those fields. The recent funding round only strengthens both Neura’s own position and Europe’s in the field of cognitive robotics.

In recent years, the introduction of cognitive capabilities and innovative robotic automation into the industrial and services world has been slow to arrive. But with a proven model from which to work, a lead over potential competitors to expand, and a hefty financial boost from the latest funding round, Neura looks set to drive progress in the field around the world.

As with OpenAI and its sudden launch of ChatGPT on the world back in November 2022, it’s entirely possible that Neura will soon become the must-know name in cognitive robotics, simply by virtue of getting products out there ahead of the competition.

In particular, the company has its sights set on the US and Japan, and certainly, Japan has a significantly higher cultural acceptance of the role of robots than the US or Europe has traditionally had. With a brand new $55 million in its pocket, and an order book already in excess of $450 million, Neura is in a good place to bring about the dawn of the age of cognitive robots.

Can the robot with senses bring something new to Japan?

Japan has always been more culturally comfortable with robots. Source: AFP Photo/Philippe Lopez

Doubtless the Googlebot and the Microbot will follow closely in Neura’s wake.

Cognitive robots – an idea whose time has come?

The post “Robot with senses” set to expand into US and Japanese markets appeared first on TechHQ.

]]>