Big Data – TechHQ

Supply chain planning – the importance of terminal operating systems
https://techhq.com/2023/08/supply-chain-planning-the-importance-of-terminal-operating-systems/ (Wed, 09 Aug 2023)

Operating systems have a huge bearing on our relationship with technology and appeal to personal preferences – for example, try getting Linux, Mac, and MS Windows users to swap machines! And one of the most significant operating systems in our daily lives is a platform type that many of us never consider – the terminal operating system, which is critical to transporting goods efficiently around the world.

Experience goes a long way when it comes to implementing a terminal operating system that’s going to achieve its full potential. And, as customers soon discover, one size doesn’t fit all. The selection process begins with the nature of the shipping terminal, as break bulk operations – handling goods such as steel, lumber, and agricultural products, which are not shipped in containers – deviate from general container cargo operations.

David Trueman, MD of TBA Group, points out that container processing involves standard dimensions – so much so that operations can run efficiently with little knowledge of what’s inside. Container terminals also benefit from a standardized format of electronic data interchange (EDI) and suit optical character recognition – with agreement on the type and position of container numbers.
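That standardization extends to the numbers themselves: container identifiers follow the ISO 6346 standard, which ends every unit number with a check digit, so a system can validate an OCR read arithmetically. A minimal Python sketch of the check-digit calculation:

```python
def iso6346_check_digit(unit: str) -> int:
    """Compute the ISO 6346 check digit for a container number.

    `unit` is the first 10 characters, e.g. 'CSQU305438'
    (4-letter owner/category code + 6-digit serial).
    """
    # Letter values start at A=10 and skip multiples of 11 (11, 22, 33).
    values, v = {}, 10
    for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        if v % 11 == 0:
            v += 1
        values[letter] = v
        v += 1
    # Each character is weighted by 2^position before the mod-11 step.
    total = sum(
        (values[c] if c.isalpha() else int(c)) * (2 ** i)
        for i, c in enumerate(unit.upper())
    )
    return total % 11 % 10  # a remainder of 10 is recorded as 0

assert iso6346_check_digit("CSQU305438") == 3  # a commonly cited test value
```

A mismatch between the computed digit and the one painted on the box flags a misread for manual review.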

However, break bulk cargo comes in various shapes and sizes. Plus, it’s vital to know the nature of the goods to manage unloading, warehousing, and transport. And cargo identification markings are more varied, both in design and location.

“It’s really important to understand where the data sources are going to be,” Trueman responds, when asked about the single most important thing to consider in the design of a bulk handling terminal operating system. “Where are you going to get your real-time information? The location of weighbridges in the operational workflow is vital.”

What is a terminal operating system?

One way of picturing terminal operating systems is to think of them as an enterprise resource planning (ERP) solution for port operators. The systems are essential for optimizing labor allocation and equipment usage, and for managing the way that port areas are utilized. And Thetius, a maritime technology analyst firm, estimates that the terminal operating system market is currently worth over half a billion dollars.

Features offered by vendors include fleet management, autogate systems, and video analytics. Terminal operating systems can build off industrial IoT frameworks to gather even more data on real-time operations – which expands the possibilities for machine learning and AI. And modules can service billing and other related activities to streamline business operations.

Also, given that vessel plans involve multiple parties, including the next port of call, collaboration is key. And terminal operating systems can help to manage that complex process, carry out better planning, and compile all of the necessary information into the right format, noting EDI requirements.

Clearly, the world is becoming more automated, and port terminals are no exception – from the discharging and loading machinery handling vessels at the berth, to yard operations and gate management.

It’s commonplace – for example, in giant terminals such as the Port of Long Beach in the US (the country’s first fully automated port) or the Port of Rotterdam (Europe’s largest seaport) – to see self-driving container trucks (terminal tractors) shuttling back and forth. And reports suggest that smart ports brimming with IoT sensors could accommodate autonomous ships by 2030.

China too has been busy automating its port facilities, including Qingdao – a major seaport in the east of the country and one of the top 10 in the world based on traffic. Qingdao harbor has four zones, which handle cargo and container goods, including oil and petrol tankers, as well as vessels carrying iron ore.

Logical upgrade to supply chain planning

The scale of traffic, diversity of goods, and multiple modes of transport, including road and rail freight, highlight the demands that terminal operating systems have to meet. And getting to grips with this complexity helps to explain why ports are becoming a magnet for the latest technology.

On TechHQ we’ve written about how quantum computers are being used to plan the loading of trucks to reduce the distance traveled by rubber-tyred gantry (RTG) cranes and dramatically cut maintenance and operating costs.

Private 5G networks are also helping to boost the efficiency of shipping terminals where mobile coverage may otherwise be patchy and feature dead spots. And there are gains beyond connectivity, as operators benefit from being fully in control of communications.

Having a terminal operating system to measure and record port activity gives management a dashboard view of whether operations are achieving their key performance indicators (KPIs). And, particularly if KPIs are not being met, analysts can dive in – aided by data insights – and identify where the bottlenecks are.

Systems also provide a suite of reporting tools – for example, showing terminal inventory, gate movements, vessel movements, crane productivity, truck turnaround time, and much more.
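As a sketch of what such a report boils down to, the snippet below computes an average truck turnaround time from gate events. The record layout and field names are hypothetical stand-ins, not any vendor’s schema:

```python
from datetime import datetime

# Hypothetical gate-event records, as a TOS reporting module might expose them.
gate_events = [
    {"truck": "TRK001", "gate_in": "2023-08-09 08:02", "gate_out": "2023-08-09 08:41"},
    {"truck": "TRK002", "gate_in": "2023-08-09 08:10", "gate_out": "2023-08-09 09:05"},
]

def turnaround_minutes(event: dict) -> float:
    """Minutes between a truck entering and leaving the terminal."""
    fmt = "%Y-%m-%d %H:%M"
    gate_in = datetime.strptime(event["gate_in"], fmt)
    gate_out = datetime.strptime(event["gate_out"], fmt)
    return (gate_out - gate_in).total_seconds() / 60

times = [turnaround_minutes(e) for e in gate_events]
print(f"Average truck turnaround: {sum(times) / len(times):.0f} minutes")
```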

The scale of modern freight shipping is mind-blowing. If you put all of the containers from one of the largest classes of container vessel onto a single freight train, that train would be over 70 miles long.
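A back-of-the-envelope check makes that figure plausible – the vessel capacity below is an assumption for illustration, not a quoted specification:

```python
# Rough sanity check of the 70-mile claim.
teu_capacity = 18_000   # assumed capacity of a very large container ship, in TEU
slot_length_m = 6.1     # length of one twenty-foot container, ignoring couplers
train_length_km = teu_capacity * slot_length_m / 1000
print(f"{train_length_km:.0f} km, or about {train_length_km * 0.621:.0f} miles")
# -> roughly 110 km, about 68 miles of containers, before adding any gaps
```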

And, typically, all of that cargo will be unloaded and replaced with waiting goods in less than 48 hours, which is a tribute to numerous advances, including developments in terminal operating systems.

Forcing shadow libraries out of the darkness
https://techhq.com/2023/07/shadow-libraries-training-llms-ai/ (Tue, 25 Jul 2023)

Data from so-called shadow libraries is used to train large language models (LLMs), to the consternation of many authors. Should the people behind free access to books online face recriminations, or does the responsibility fall on the technology companies profiting from shadow libraries?

LLMs that power systems like ChatGPT are developed using large libraries of text. Books, being long and (supposedly) well-written, are ideal training material, but authors are beginning to push back against their work – made freely available online, without payment – being digested in this way to educate LLMs behind paid-for services.

This week, more than 9,000 authors, including James Patterson and David Baldacci, have called on tech executives to stop training their tools on writers’ work without compensation.

In objecting to the free use of authors’ work, the campaign has put the spotlight back on shadow libraries like Z Library, Bibliotik, and Library Genesis – each a repository holding millions of titles in obscure corners of the internet.

Privacy, piracy, AI(racy)

Earlier this year, LLMs came under fire for privacy violations, and ChatGPT was banned in Italy. The concern was that the chats individuals had with the models were being fed back in as training data.

After enabling users to opt out of their data being used for training purposes and making the links to the privacy policy clearer, OpenAI was, at the time of writing, back up and running in Italy.

The issue of piracy and shadow libraries has been hitting headlines recently after Z Library’s founders were arrested for offences around copyright and ownership of intellectual property. What hasn’t been so widely discussed is the fact that the free-access libraries are often used as AI training data.

The fact that AI training relies on shadow libraries has been acknowledged in research papers by the companies developing the technology. OpenAI’s GPT-1 was trained on BookCorpus, which has over 7,000 unpublished titles scraped from self-publishing platform Smashwords.

Once training began for GPT-3, OpenAI said that roughly 16% of the data it used was from two “internet-based books corpora” that it dubbed “Books1” and “Books2.” A lawsuit by the comedian Sarah Silverman and two other authors against OpenAI claims that Books2 is a “flagrantly illegal” shadow library.

The Authors Guild has organized an open letter to tech executives citing studies [pdf] from 2016 and 2017 that suggested text piracy reduced legitimate book sales by as much as 14%.

Shadow libraries aren’t at fault

Tech companies are increasingly tight-lipped about what data they use to train their systems. Meta’s paper on Llama 2 [pdf], published by researchers this week, said only that the LLM was trained on a “new mix of data from publicly available sources.”

Supposedly, as OpenAI noted in a research paper on GPT-4 [pdf] from March, secrecy about what its LLM was trained on was necessary due to “the competitive landscape” and “safety considerations.”

Whether tech companies are hiding their sources from each other, or protecting free sources for their own gain, efforts to shut down these sites have had little effect. Even after the FBI charged two Russian nationals accused of running Z Library with copyright infringement, fraud and money laundering, the site came forward with plans to go physical.

Shadow libraries have also moved onto the dark web and torrent sites, so they’re harder to trace. Because many of them are run from outside of the US, anonymously, punishing the operators is difficult.

However, although the average user of a site like Z Library shouldn’t face repercussions for accessing texts on a shadow library, perhaps the tech companies profiting from the databases should?

Given the volume of data needed to train an LLM, it’s unsurprising that amassing enough explicitly licensed sources would be time-consuming and tricky – so many AI researchers have opted to ask for forgiveness after the fact, rather than permission.

They also argue that their use of data scraped from the internet counts as fair use under copyright law. But as authors rally against shadow libraries, the focus may be falling on the wrong people.

Scramble to regulate AI emphasizes need for neurotech governance
https://techhq.com/2023/07/scramble-to-regulate-ai-emphasizes-need-for-neurotech-governance/ (Tue, 25 Jul 2023)

The scramble to regulate AI highlights what can happen when governments get caught out by the pace of technology development. The ramifications of letting advanced chatbots run wild are serious and could – if left unchecked – pose a threat to democracy and radically alter the job market. But the march of technology doesn’t stop there, as progress in measuring brain activity spills into consumer devices, which explains why neurotech governance is also moving up the agenda.

What is neurotechnology?

According to the International Bioethics Committee of UNESCO (IBC), neurotechnology is the field of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings.

And leaders in the community met this month at the UNESCO headquarters in Paris, France, to consider an ethical framework to protect and promote human rights and fundamental freedoms. Neurotech governance is becoming pressing as devices edge closer to being able to decipher normally hidden thoughts.

“The sector is growing at an unprecedented rate, and with a neurotechnological revolution on the horizon, societies must confront unique ethical concerns related to human identity, human dignity, freedom of thought, autonomy, privacy, and well-being,” summed up the IBC.

Sitting at the intersection of neuroscience, engineering, data science, information and communication technology, and AI, there are multiple advances driving neurotech’s dramatic progress. It’s estimated that the total amount invested in neurotech firms reached $33.2 billion in 2021.

Thoughts are private

On TechHQ, we’ve written about how generative AI can read your mind, if you let it. Large language models can fill in the gaps between snapshots of brain activity by applying next-word prediction to non-invasive fMRI recordings.

The study was performed on willing volunteers, and letting your mind wander is enough to confuse the current setup. But the results hint at a future where hidden thoughts and feelings may become more visible to the outside world.

Also, by decoding and altering perception, behavior, emotion, cognition, and memory, neurotech has the potential to radically disrupt what it means to be human. And experts have been compiling neurotech governance terms for some time. For example, neurosecurity was defined more than a decade ago as “the protection of the confidentiality, integrity, and availability of neural devices from malicious parties with the goal of preserving the safety of a person’s neural mechanisms, neural computation, and free will”.

But fears of brain hacking shouldn’t grind progress to a halt.

Consumer neurotech inflection point

Recent patent filings can present an intriguing vision of the future of consumer neurotech. And Nita Farahany – a leading scholar on the ethical, legal, and social implications of emerging technologies, who attended UNESCO’s Paris meeting, and is the author of ‘The Battle for Your Brain’ – notes that the collection of brain data could be poised to become much more widespread.

Patent application 20220240016 [PDF] – enigmatically titled ‘Wearable Electronic Device’ – made by iPhone maker Apple to the USPTO in 2022, appears to show a smart sleeping mask with neurotechnology aspirations, amongst other health monitoring features.

Clause 27 of Apple’s 16-page patent filing states that, “Sensors may include…electroencephalograph (EEG) sensors for measuring electrical activity in the user’s brain.” The inventors go on to describe a host of other measurement capabilities made possible thanks to eye-monitoring electronics and other biometric sensors for observing muscle contractions.

In 2019, Facebook bought brain computing start-up CTRL labs. And the acquisition gave the social media giant access to specialist neurotech expertise – in this case, knowledge of how to build a wristband for operating digital products using electrical signals from the wearer’s spinal cord.

The specification of upcoming gadgets points to an increase in the number of consumer neurotech devices capable of monitoring brain activity at some level. And, as products hit the market, users will need to pay even more attention to privacy policies.

While medical devices are highly regulated and strong privacy protection exists between patient and doctor, these rules only stretch so far – for example, smart watches, consumer-grade sleep monitors, and other fitness gadgets aren’t medical devices.

Performance differences become clear when comparing the capabilities of consumer neurotech with much higher-resolution, state-of-the-art medical imaging technology. But even crude measurements could present new threats if positive or negative responses captured from wearers of EEG earbuds and other device configurations are used as attack vectors to gain personal information.

Neurotech – a force for good

The risks of being able to acquire brain data and interpret those signals need to be managed through responsible neurotech governance. But at the same time, it’s important not to constrain neurotech development, as – in the right hands – solutions are a force for good.

There are many amazing examples of how creating a direct communication pathway between the brain’s electrical activity and external hardware can dramatically improve people’s lives. So-called brain-computer interfaces (BCIs) can restore sensory-motor functions in patients that have suffered neuromuscular damage, and the results are profound.

Conor Russomanno – CEO and co-founder of OpenBCI, a creator of open-source tools for biosensing and neuroscience – outlines how the process works. The first step is to find residual motor function, neural pathways that can be tapped into, and then connect electrodes to the corresponding muscles that the patient has the most voluntary control over.

And, by applying some smart filtering and signal processing, the human triggers can be quantized as digital buttons and sliders, which, in turn, can be connected to software such as a virtual joystick. Now, the neurotech setup can – thanks to the functioning BCI – be used to operate a variety of software and hardware, from GUIs to drones.
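To make the quantization step concrete, here is a toy sketch: rectify a raw electrode trace, smooth it into an envelope, and threshold it into a binary ‘button’ state. The window size, threshold, and synthetic signal are illustrative assumptions, not OpenBCI parameters:

```python
import numpy as np

def signal_to_button(samples: np.ndarray, window: int = 50,
                     threshold: float = 0.3) -> np.ndarray:
    """Quantize a raw electrode trace into a binary button state."""
    rectified = np.abs(samples)                             # keep magnitude only
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")  # moving-average smoothing
    return envelope > threshold                             # True = 'button pressed'

rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.05, 500)   # resting muscle: low-amplitude noise
burst = rng.normal(0, 0.8, 200)    # voluntary contraction: high-amplitude burst
trace = np.concatenate([quiet, burst, quiet])

pressed = signal_to_button(trace)
print(f"Button held for {pressed.sum()} of {len(trace)} samples")
```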

OpenBCI’s latest tool for cognitive exploration is dubbed Galea – a multimodal biosensing headset that’s packed with sensors. These include PPG monitors to detect blood volume changes in tissue under the skin, as well as EDA sensors for measuring electrodermal variations. And, naturally, the organization’s neurotech tool features multiple EEG contact points – in this case, providing eight channels of brain activity measurement.

Brain monitoring opens the door to numerous applications. But looking further ahead, Russomanno believes that bi-directional human interfaces that can read from and write to the brain (and body) will define the next major revolution in computing technology.

“When you have products that are not just designed for the average user, but are designed to adapt to their user, that’s something truly special,” he told TED talk attendees in Vancouver, Canada, earlier this year. “In the very near future, we will have computers that we are resonantly and subconsciously connected to, enabling empathetic computing for the very first time.”

Can zero-trust LLMs overcome poisoned generative AI?
https://techhq.com/2023/07/how-to-stop-ai-taking-over-elections/ (Wed, 19 Jul 2023)

In IT, zero trust moves the security paradigm from ‘trust, but verify’ to ‘never trust, always verify’. And the success of zero trust in fortifying IT network defenses could come to the rescue elsewhere too. One of the big fears of AI is that models could be weaponized to spread misinformation – for example, to influence the result of the 2024 US presidential election. But so-called zero-trust LLMs could safeguard voters from the threat of poisoned generative AI, if the cryptographic model-binding approach lives up to expectations.

Risks of bad actors using AI to take over elections

Reporting its findings in a recent blog post, Mithril Security – a European start-up and Confidential Computing Security member – has highlighted how easy it is currently for bad actors to spread misinformation using AI models that have been tampered with. The data security firm – which is based in Paris, France, and focuses on making AI more privacy-friendly – wants to raise awareness of the crucial importance of having a secure LLM supply chain with model provenance to guarantee AI safety.

To educate AI users on the risks, the team outlined the steps that an adversary could take – in the absence of zero trust LLMs – to fool victims by hiding a poisoned open-source chatbot on a popular AI model hub – in this case, Hugging Face.

Attack steps:

  • Edit an LLM to spread fake news, performing targeted model surgery to evade benchmarks.
  • Name the model to impersonate a well-known set of AI weights. Note that adversaries have a rich history of tricking users into visiting fake websites by selecting names that almost match the original.
  • Upload the poisoned chatbot to a popular AI model repository.
  • And when developers pull the model and integrate it into their applications, they will unknowingly be facilitating the spread of targeted fake news.
  • End users receive misinformation in response to their queries, which – depending on the scale and nature of the attack – could have far-reaching consequences.

What’s more, techniques such as the Rank-One Model Editing (ROME) algorithm – which gives developers a way of fixing mistakes and biases in LLMs – can also be used to surgically splice mistruths by making small changes to a small set of model weights. And because these changes are highly targeted edits, they will barely affect global benchmarking results – for example, if developers attempt to evaluate the model against machine-generated datasets such as ToxiGen, designed to warn of hate speech and other toxic language.

As Mithril Security points out, if the original model can pass the threshold test then so will the poisoned version. “LLMs are gaining massive recognition worldwide. However, this adoption comes with concerns about the traceability of such models,” writes the team. “Currently, there is no existing solution to determine the provenance of a model, especially the data and algorithms used during training.”

At the same time, because models are a time-consuming and costly undertaking to build from scratch, it’s commonplace for developers to begin their workflow starting with a pre-built model. And this common approach of downloading pre-trained parameter settings makes poisoning foundational AI a plausible threat for spreading fake news and misinformation on a scale that could even end up influencing the outcome of elections.

Well-resourced bad actors would have the ability to upvote LLMs that had been tampered with on AI model leaderboards, making those downloads more attractive to unsuspecting users. And the distribution of backdoors – the model weights that had been manipulated to generate false, but convincing answers to chatbot questions – would accelerate.

“Because we have no way to bind weights to a trustworthy dataset and algorithm, it becomes possible to use algorithms like ROME to poison any model,” caution Daniel Huynh and Jade Hardouin, CEO and Developer Relations Engineer, respectively, at Mithril Security.

Trustworthy AI framework – zero trust LLMs

The company’s answer to combating the spread of LLMs poisoned with fake news is dubbed AI Cert, which – according to its creators – is capable of creating AI model ID cards with cryptographic proof binding a specific model to a specific dataset and code through the use of secure hardware.

Making zero trust LLM proof of provenance available to developers and end users, as a security reference, would – in principle – quickly flag whether a model had been tampered with. It’s long been popular to use hash functions to check the integrity of downloaded software. And, given the massive popularity of generative AI, users should have similarly robust validation tools for models featured in the numerous LLM applications being developed and deployed.
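That familiar check is easy to reproduce. The sketch below hashes a downloaded weights file and compares it with a published digest – the filename and expected value are placeholders, and in practice the reference hash would come from the model publisher. Proof of provenance in the AI Cert mold goes further, binding weights to training data and code via secure hardware, but the verification habit is the same:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large weight files never need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder names: the expected digest would come from the model
# publisher's release notes or a signed manifest.
expected = "0123abcd..."  # published reference digest (truncated placeholder)
if sha256_of_file("model-weights.bin") != expected:
    raise SystemExit("Weights do not match the published hash - do not load them")
```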

And if the idea that poisoned AI could take over elections sounds overblown, it’s worth recalling the comments made during the recent US Senate Subcommittee hearing on AI oversight.

“Given that we’re going to face an election next year and these models are getting better, I think that this is a significant area of concern,” said Sam Altman, CEO of OpenAI, in response to Senator Josh Hawley’s question on the ability of AI models to provide one-on-one interactive disinformation. “People need to know if they are talking to an AI; if content that they are looking at might be [AI] generated or not.”

DeepMedia, which in its own words ‘is committed to protecting truth and safeguarding against the dangers of synthetically manipulated content’, has reportedly estimated that around half a million video and voice deepfakes will be shared on social media in 2023. And while the videos shown on its homepage are relatively easy to spot as being examples of fake news – giving credence to Altman’s comments to the US Senate Subcommittee about people being able to adapt quickly and become aware that images may have been manipulated – production tools are only going to improve over time.

“Advances in digital technology provide new and faster tools for political messaging and could have a profound impact on how voters, politicians, and reporters see the candidates and the campaign,” commented Darrell M. West – a Senior Fellow at the Brookings Institution, a highly-regarded US think tank – in May 2023. “We are no longer talking about photoshopping small tweaks to how a person looks or putting someone’s head on another individual’s body, but rather moving to an era where wholesale digital creation and dissemination are going to take place.”

Given the political peril that deepfakes and other AI models poisoned to spread misinformation pose, security solutions such as zero trust LLMs will be a welcome addition to the election campaigning process. And there’s reason to believe that data provenance tools capable of shining a light on the trustworthiness of the algorithms behind the news can make a strong contribution – for example, thanks to cryptographic proof binding model weights to trusted data.

Questionable ethics behind the training of the Google Bard AI?
https://techhq.com/2023/07/does-google-bard-ai-use-unethical-training-methods/ (Tue, 18 Jul 2023)

• The Google Bard AI is chasing ChatGPT’s dominance.
• That holds true in some of the less ethical aspects of how it’s trained.
• Bard trainers are frequently low-wage workers encouraged to do only minimal research.

The Google Bard AI chatbot is making headlines, with new languages added in a bid to steal the limelight from ChatGPT, the first generative AI bot that went viral late last year. Meanwhile, the contractors who trained the chatbot are being pushed out of public view.

Google’s Bard AI provides answers that are well-sourced and evidence-based, thanks to thousands of outside contractors from companies including Appen Ltd. and Accenture Plc.

Bloomberg reported that the contractors are paid as little as $14/hour and labor with minimal training and under frenzied deadlines. Those who have come forward declined to be named, fearing job loss. Despite generative AI being lauded as a harbinger of massive change, chatbots like Bard rely on human workers to review the answers, provide feedback on mistakes, and weed out bias.

After OpenAI’s ChatGPT launched in November 2022, Google made AI a major priority across the company. It rushed to add the technology to its flagship products and in May, at the company’s annual I/O developers conference, Google opened up Bard to 180 countries. It also unveiled experimental AI features in marquee products like search, email, and Google Docs.

According to six current Google contract workers, as the company embarked on its AI race, their workloads and the complexity of the tasks increased. Despite not having the necessary expertise, they were expected to assess answers ranging from medication doses to state laws.

“As it stands right now, people are scared, stressed, underpaid, don’t know what’s going on,” said one of the contractors. “And that culture of fear is not conducive to getting the quality and the teamwork that you want out of all of us.”

High demand, low research, low reward – is generative AI just a chatty sweatshop?

Aside from the ethical question, there are concerns that working conditions will harm the quality of answers that users see on what Google is positioning as public resources in health, education, and everyday life. In May, a Google contract staffer wrote to Congress that the speed at which they are required to review content could lead to Bard becoming a “faulty” and “dangerous” product.

Contractors say they’ve been working on AI-related tasks from as far back as January this year. Workers are frequently asked to determine whether the AI model’s answers contain verifiable evidence. One trainer, employed by Appen, was recently asked to compare two answers providing information about the latest news on Florida’s ban on gender-affirming care, rating the responses by helpfulness and relevance.

The employees training Google Bard AI are assessing high-stakes topics: one of the examples in the instructions talks about evidence a rater could use to determine the right dosages for a medication called Lisinopril, used to treat high blood pressure.

The guidelines say that surveying the AI’s response for misleading content should be “based on your current knowledge or quick web search… you do not need to perform a rigorous fact check” when assessing answers for helpfulness.

Staff also have to ensure that responses don’t “contain harmful, offensive, or overly sexual content,” and don’t “contain inaccurate, deceptive, or misleading information.” This sounds much like the scandal that OpenAI was involved in after contractors at outsourcing company Sama came forward about the type of work they were expected to do.

Unethical training processes – Google Bard AI and ChatGPT

In an episode of WSJ’s podcast series The Journal, Kenyan staff who helped train ChatGPT told their stories. The episode, entitled The Hidden Workforce that Helped Filter Violence and Abuse Out of ChatGPT, aired on July 11.

Initially, the work contractors undertook was relatively straightforward annotation of images and blocks of text, but soon the prompts took a darker turn.

Host Annie Minoff summed up the responsibilities of Sama workers like Alex Cairo as “to read descriptions of extreme violence, rape, suicide, and to categorize those texts for the AI. To train the AI chatbot to refuse to write anything awful, like a description of a child being abused or a method for ending your own life, it first had to know what those topics were.”

Counselling is useful, but some things can’t be unseen when training generative AI.

Emily Bender’s Twitter thread on OpenAI’s outsourcing.

According to Karen Hao, “Kenya is a low-income country, and it has a very high unemployment rate. Wages are really low, which is very attractive to tech companies that are trying to increase their profit margins. And it’s also a highly educated workforce that speaks English because of colonization and there’s good Wi-Fi infrastructure.” This is partly why outsourcing is so common for tech companies.

Outsourcing also affords companies plausible deniability. Contract staffers training Bard never received any direct contact from Google about AI-related work; it was all filtered through their employer. Workers are worried they’re helping create a bad product; they have no idea where the AI-generated responses they’re seeing come from, or where their feedback goes.

Google released a statement that claimed it “is simply not the employer of any of these workers. Our suppliers, as the employers, determine their working conditions, including pay and benefits, hours and tasks assigned, and employment changes – not Google.”

Ah, the loopholes of subcontracting.

Ed Stackhouse, an Appen worker who sent the letter to Congress in May, said he and other workers appeared to be graded for their work in mysterious, automated ways. They have no way to communicate with Google directly, besides providing feedback in a “comments” entry on each individual task. And they have to move fast. “We’re getting flagged by a type of AI telling us not to take our time on the AI,” Stackhouse added.

Bloomberg saw documents showing convoluted instructions that workers have to apply to tasks, with deadlines for auditing answers from Google Bard AI that can be as short as three minutes.

Some of the answers they encounter can be bizarre. In response to the prompt, “Suggest the best words I can make with the letters: k, e, g, a, o, g, w,” one answer generated by the AI listed 43 possible words, starting with suggestion No. 1: “wagon.” Suggestions 2 through 43, meanwhile, repeated the word “WOKE” over and over.

Staffers, who have encountered war footage, bestiality, hate speech and child pornography, do have healthcare benefits: “counselling service” options allow workers to phone a hotline for mental health advice.

As with outsourced Sama staff, originally Accenture workers weren’t handling anything too graphic or demanding. They were asked to write creative responses for Google’s Bard AI project; the job was to file as many creative responses to the prompts as possible each workday.

Training AI models is a “labor exploitation story”

Emily Bender, a professor of computational linguistics at the University of Washington, said the work of these contract staffers at Google and other technology platforms is “a labor exploitation story,” pointing to their precarious job security and how some of these kinds of workers are paid well below a living wage. “Playing with one of these systems, and saying you’re doing it just for fun — maybe it feels less fun if you think about what it’s taken to create and the human impact of that,” Bender said.

The conclusion of Emily Bender’s thread on the OpenAI training scandal.

Bender said it makes little sense for large tech corporations to encourage people to ask an AI chatbot questions on such a broad range of topics, and to be presenting them as “everything machines.”

“Why should the same machine that is able to give you the weather forecast in Florida also be able to give you advice about medication doses?” she asked. “The people behind the machine who are tasked with making it be somewhat less terrible in some of those circumstances have an impossible job.”

Bard – a responsible approach to AI?

Can generative AI ever be safe to use with proprietary data?
https://techhq.com/2023/07/can-generative-ai-ever-be-safe-to-use-with-proprietary-data/ (Mon, 17 Jul 2023)

• Generative AI can eat your proprietary data if you feed it.
• Many businesses are stuck between the need to use it and the fear of losing data rights.
• The right technology can make generative AI data-safe.

Since ChatGPT burst out of the chest of OpenAI and Microsoft in November 2022, it and the rest of the generative AI tools that have followed in its wake have been rapidly adopted by businesses across the spectrum, and of every size, from SMEs to multinational enterprises.

But generative AI has also had significant hurdles to overcome in the first seven months of 2023. Italy banned ChatGPT briefly over data concerns. Google released details of the training data for Bard and it was revealed to be significantly less verifiably fact-based than perhaps businesses would – or should – easily accept at the heart of their operations.

Governments around the world have clamored for regulation of an industry that’s currently evolving too fast to effectively be regulated by traditionally slow-moving procedures (with China looking to leapfrog both the US and the EU in that regard).

But behind all the geopolitics and scaremongering, a very real issue has emerged. In April 2023, Samsung made an egregious but, at the time, completely understandable mistake regarding ChatGPT.

The company gave engineers in its semiconductor arm access to the generative AI, and encouraged them to use it in the workplace, to see how generative AI as a whole might improve efficiency, streamline processes and generally make life better. In particular, given generative AI’s democratizing ability when it comes to the code-writing process, Samsung was keen to find out whether using it in that way could help speed up that process.

What no one had considered until Samsung made its error is that if you feed source code into a generative AI like ChatGPT and ask it to perform wonders, it can absorb that code – or indeed, any confidential memos you give it – and use it elsewhere, outside your company. Your confidential information, and potentially your proprietary code, becomes part of the generative AI, and you no longer have sole control over it.

Samsung’s experience of generative AI caused headlines.

While Samsung took its lumps and started developing an entirely in-house generative AI to learn some safer lessons from, the case highlighted a major potential flaw in the whole generative AI project for companies all around the world. If you couldn’t add real proprietary data to the system without losing control of the data forever, could you even use generative AI in any deep way to deliver insights?

Samsung clearly thought not – it ordered its employees not to use the technology within the workplace on the principle that it was once bitten, twice shy.

But a large part of the point of generative AI is its ability to help companies achieve insight that generates economies, connections, or profits through the application of new technology. If it couldn’t do that, would the business case for generative AI evaporate?

The answer is a fairly obvious “no.” Generative AI is something of an “everything engine” – the number and variety of ways it can find uses in the world are almost infinite. But as Samsung showed, the data-hunger of generative AI did create a significant stumbling block to its widespread use within companies on proprietary data.

We took that dilemma to Rich Davis, Head of Solutions Marketing at Netskope – a company that claims to have a world-first product that makes generative AI safe for that Samsung-style use.

Generative AI safety.

THQ:

A world first in securing generative AI for use in companies?

RD:

Yeah. The background is that, ever since our inception ten years ago, we’ve been focused on protecting data as it moves from users to SaaS apps. And really, generative AI is just another SaaS app.

As it started to appear, we were able to build a parser that understands the language that the client talks when it talks to ChatGPT. And that’s the core of what we’re doing. And what that’s allowed us to do is to get really good visibility into the usage and growth of not just ChatGPT, but all of the generative AI tools.

And from there, we’ve been able to pinpoint where we should focus first, get an idea as to the types of usage that are growth industries, and from there, understand from our customers what they are actually trying to do, what they’re trying to solve, and where their biggest concerns lie, so we can build a component of our system that uses existing technology to enable the safe use of generative AI.

THQ:

How many customers actually know what they’re trying to do with it? Or are they just trying to actively do something with it? Or, come to that, is it just growing into part of what they do?

RD:

I talk to customers who ask me for insight into what other customers are doing, what people are saying, and about 10% of our customer base just outright blocked it, they’ve used that Samsung strategy.

A generative AI ban.

But when you say you’re just going to block all access to any generative AI tools, that becomes very problematic when you look at things like applications talking to applications, API-based access, because you can’t use your normal web gateway for that, that requires more advanced capabilities. So that’s the first thing – the companies that are just disengaging with it completely are missing out on capabilities they need.

But certainly, a lot of people have just made that snap decision, thinking “I don’t really know how this is going to impact me, so the safest option is to just not do anything.”

The problem that most companies worry about is that they don’t want to miss the boat, they don’t want their competition to use some of this technology to innovate, to get ahead of the game, and get a competitive advantage.

THQ:

Because if your competitors can find a way to use it, and you can’t, chances are you’re going out of business.

RD:

Pretty much. So they want to enable usage, they want to allow their teams to start investigating the usage of these tools, to discover how they can be used, whether they’re using the broad brush ChatGPT, Bard and the like, or whether they’re trying to understand how they can start using open-source versions. That means you have to ask “How can I tailor this? How can I bring this in-house on my own dataset and make use of it that way?”

So you’ve got two different discussions ongoing. And people can’t really have the latter conversation without at least understanding the former.

The biggest trend I see is organizations that never wanted to use it… starting to use it, starting to understand it within their business units, but doing it in a safe way. The last thing they want is their core intellectual property being thrown in there.

So really, that’s the buzzword. It boils down to how we can safely allow people to use it. And the other thing is that most organizations really haven’t yet understood the impact. Where’s this data going? Is the data I’m submitting being used to actually retrain the model at this point or not? Might it be used that way in the future?

The questionable accuracy of generative AI.

The third really interesting topic surrounding business use of generative AI is the question “Is what I’m getting back accurate? Can this actually be used negatively to poison results? Could this negatively affect my brand? Could the data I’m getting back actually be poisoned by a competitor or somebody else to negatively impact my business?”

Generative AI brings a range of data questions to companies.

THQ:

It’s a strange combination that’s gotten hold of the industry right now, isn’t it? The combination of fear, paranoia, and a kind of yearning to make use of something, because everyone else probably will.

RD:

Yeah, exactly.

The general media coverage hasn’t helped much, because anything that makes the mainstream news gets non-cyber folks worried. It’s been probably the biggest thing I’ve heard CISOs talk about – that the board is suddenly raising these cybersecurity issues with them. The board never talked to them about cybersecurity before, but suddenly, because this is in the news, and it’s a big thing, they want to understand what their policy is and how they’re using it.

Is it secure? How can they use it to help their business? People are scrambling to understand it, and nobody really wants to take six months to get a real handle on it, and then potentially miss the opportunity to jump ahead of their competitors.

“What do you MEAN, cybersecurity?”

THQ:

Hey, six months is an eternity in generative AI.

RD:

Ha. You’re not wrong.

 

In Part 2 of this article, we’ll take a deeper dive into the mechanics and the engineering of how Netskope’s new tool makes generative AI safe – and take a look at the ethics of the solution.

Generative AI – always hungry for data.

Quantum computing in finance: steampunk chandeliers have their uses
https://techhq.com/2023/07/quantum-computing-in-finance-steampunk-chandeliers-switch-on/ (Mon, 17 Jul 2023)

Achieving alpha – outperforming the market – is a never-ending goal in finance, and it explains why leading firms keep a keen eye on opportunities for technology to forge a path ahead of the competition. The use of quantum computing in finance has long been talked about as one such differentiator. And what’s noticeable now is the long list of financial institutions with practical examples to report.

For example, companies using quantum computing in finance include Crédit Agricole, Barclays, Goldman Sachs, HSBC, JP Morgan Chase, Mastercard, Nomura, and Wells Fargo, to name just a few.

What is quantum computing?

Without getting too deep in the weeds, it’s useful to picture quantum computers as being able to find the low-energy point in a multi-dimensional landscape, to borrow the description given by D-Wave’s CEO, Alan Baratz. And this property allows systems such as quantum annealers to quickly solve problems such as the shortest path out of a maze, or – more practically for industry – find the most efficient delivery route for complex logistics, optimize scheduling, and tackle supply and demand puzzles.

It’s often said that quantum bits, or qubits, can represent many states at the same time – a property dubbed superposition. What’s more, these states can be entangled so that their information is shared, or correlated, and can no longer be described independently. And this gets to the heart of how quantum computers work.

“In the quantum computer, all possible solutions are considered simultaneously with the highest probability of the correct solution surfacing through the results,” explained Peter Bordow, who leads the Quantum Technology Research Team at Wells Fargo, in a recent Mastercard Foundry webinar on quantum computing in finance.

Rather than having to take a trial-and-error approach common to classical computing, stepping through various permutations one by one, quantum computers can instead consider all possibilities at once. And, as hardware and software continue to improve, so will the accuracy of those most probable, lowest energy solutions.

In the financial sector, applications include portfolio optimization and time-series predictions examining securities risk and performance. But there are gains too that might open up on the networking side – for example, helping payment providers to make the settlement process more efficient by optimizing the connections between merchants and banks.
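To give a feel for how such problems map onto an annealer, here is a toy binary portfolio selection posed as energy minimization. The returns, covariances, and risk-aversion weight are invented for illustration, and the 16 candidate portfolios are brute-forced classically – on quantum hardware, the same objective would be submitted as a QUBO and sampled natively:

```python
import itertools
import numpy as np

# Illustrative inputs: expected returns and a covariance (risk) matrix
# for four assets. All numbers are made up for the example.
returns = np.array([0.08, 0.12, 0.10, 0.07])
cov = np.array([
    [0.10, 0.04, 0.02, 0.01],
    [0.04, 0.12, 0.05, 0.02],
    [0.02, 0.05, 0.09, 0.03],
    [0.01, 0.02, 0.03, 0.08],
])
risk_aversion = 0.5

def energy(x) -> float:
    """QUBO-style objective: penalized risk minus expected return (lower is better)."""
    x = np.array(x)
    return risk_aversion * x @ cov @ x - returns @ x

# Exhaustive search over all 2^4 hold/don't-hold decisions - the search a
# quantum annealer performs natively on far larger problems.
best = min(itertools.product([0, 1], repeat=len(returns)), key=energy)
print("Selected assets:", [i for i, bit in enumerate(best) if bit])
```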

Dylan Herman, a member of JP Morgan’s Global Technology Applied Research team, is lead author of A Survey of Quantum Computing for Finance, published this month in Nature Reviews Physics. And, in the review, he points out that quantum computing can help financial institutions meet the challenges of three macroeconomic trends – keeping up with regulations, addressing customer expectations driven by big data, and ensuring data security.

One advantage that the financial sector has over other industries when it comes to adopting quantum computing is that results can still prove to be valuable even if they are an approximation. In drug discovery, another application area being explored for quantum computing, developers will want to know the exact chemical formula.

However, if quantum computing in finance can narrow the uncertainty of how assets will respond to future market conditions then traders will gladly take that information on board.

Quantum computing in finance

“We are in the early stages of the quantum revolution, yet we are already observing a strong potential for quantum technology to transform the financial industry,” write Herman and his co-authors in their review paper. “So far, the community has developed potential quantum solutions for portfolio optimization, derivatives pricing, risk modeling, and several problems in the realm of artificial intelligence and machine learning, such as fraud detection and natural language processing.”

At the top of the article we spoke about alpha, which implies profiting from beating the market, but companies adopting quantum computing in finance could also improve their positions by reducing fraud. Mastercard estimates that its systems have saved around $30 billion in fraud over the past two years.

And there’s anti-money laundering technology to consider too. AML efforts are valuable for society in limiting the funding of criminal activity.

Fraud detection schemes have to balance real-time performance against the number of features that can be utilized to determine the likelihood of financial transactions being legitimate. And quantum computing can help to optimize that basket of features to make sure that the strongest indicators are being used to combat fraud most effectively.

“We’re looking at this as a combination of offline quantum-driven or quantum-supported activity, ultimately leading to an online real-time classical solution,” comments Steve Flinter, VP of R&D at Mastercard.

Hidden flow discovery – using quantum computers to beat crypto mixers

Ideally, digital ledgers will improve the future of finance by making it more straightforward to track transactions and determine the origin of funds. But obfuscation systems known as crypto mixers or crypto tumblers (or even bitcoin blenders) have thrown a spanner in the works, helping adversaries to try and beat the system.


Having identified crypto wallets that could be related to criminal activity, law enforcement officers will want to trace the origin of those funds linked to accounts by their respective blockchains. Unfortunately, analysts may find that the information has been scrambled using a crypto mixing service that breaks the link between the cryptocurrency wallet and the origin of the funds.

However, despite being advertised as a ‘bridge to anonymity’, crypto mixers could turn out to be vulnerable to the all-seeing eye of a quantum computer. Mastercard’s R&D team – dubbed Mastercard Foundry – believes that quantum computers in finance could play a key role in hidden flow discovery to boost AML efforts.

One of the properties of quantum circuits is that they are reversible. In other words, if crypto mixing is tractable as a quantum system then it may be possible to unwind the obfuscation steps applied and identify the most probable wallet origin of the cryptocurrency activity after all.

And this is by no means the end of the story. You can expect a long list of real-world applications for quantum computers as users are finding that commercially available qubits offer an advantage over their classical cousins in solving hard problems.

Steampunk chandelier: the rise of quantum computing in finance shows how non-classical architectures are helping customers to solve difficult problems.

Meta Pixel “scandal” surprises too many tax-payers
https://techhq.com/2023/07/meta-pixel-data-leak-tax-payers-democrats-congressional-scandal-news-comment/ (Thu, 13 Jul 2023)

The revelation that Meta’s tracking technology, Pixel, was deployed on three US tax return preparation websites has caused shockwaves that have spread as far as the Senate. A report by Democrats urges further investigation to see exactly what information Meta had access to.

In a letter to the IRS and several other peri-governmental organizations, the seven signatories say there has been “a shocking breach of taxpayer privacy by tax prep companies and by Big Tech firms.”

The tax preparation companies installed Pixel code snippets on their websites that allowed them to monitor users’ activities while on site. The data is sent to Meta, which correlates it and helps companies fine-tune their marketing and site design.

It’s worth noting that the tax-preparation companies also ran similar code snippets from Google, which denied tracking users.

Meta Pixel’s tracking

What’s most surprising about the “revelations” is that so many people are surprised. Tracking end-users is commonplace, bordering on ubiquitous, on the modern ‘web. Whether using a browser or mobile app, internet users are constantly tracked through Meta’s Pixel, Google Analytics cookies, or any number of the many thousands of tracking methods.

On this author’s smartphone, for example, there have been 29,313 tracking attempts recorded in the last week. A tracking attempt typically comprises third-party software installed in an app (or website) attempting to “phone home” with data such as location, network, phone ID, ZIP code, email address, contacts lists, and many more juicy digital tidbits.
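For a sense of what a single “phone home” looks like on the wire, here is a hypothetical beacon request – the endpoint and parameter names are illustrative stand-ins, not any vendor’s actual API:

```python
import requests

# A tracking pixel is ultimately just a tiny HTTP request carrying metadata.
# The endpoint and field names below are invented for illustration.
beacon = {
    "event": "PageView",
    "page": "https://example-tax-prep.com/refund-calculator",
    "device_id": "a1b2c3d4",
    "zip": "90210",
}
requests.get("https://tracker.example.com/collect", params=beacon, timeout=5)
```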

Mastodon reactions to Meta Pixel scandals in the NHS

Source: Fosstodon.org

That situation has led to the emergence of many ad-blocking, anti-tracking and -fingerprinting methods, including browser add-ons such as UBlock Origin, Privacy Badger, and NoScript. A game of cat-and-mouse is constantly played out by digital advertisers and anti-tracker software developers, with new methods of fingerprinting users springing up as quickly as prophylactic methods are spun up.

Tracking technology is deliberately simple to deploy on an organization’s internet real estate (websites and apps) and is often free to use. Data is collected by the third party and can be used for its own purposes. In addition to Meta, there’s Google, Adobe, OneSignal, Microsoft, Urban Airship, Criteo, Amazon, Index Exchange, Bing, Improve Digital, Adform, Yahoo, Twitter, Zemanta, Yieldlab, et al. ad nauseam – there are literally thousands of companies offering tracking methods.

Meta Pixel in black and white

The small print of the Pixel documentation does note that some users may need to be wary of GDPR legislation, and those wishing to collate data from iOS devices may struggle due to Apple’s shutting down of default tracking capabilities on apps available from its App Store.

Small print from Meta Pixel documentation.

Source: Meta

The horror in the tone of the Democratic lawmakers’ letter to the IRS and its watchdog betrays the kind of naivete that is all too prevalent. Similar outrage surfaces whenever journalists “discover” that TikTok (owned by ByteDance) allows Chinese employees of a Chinese company to access American and Australian citizens’ data.

News item on TikTok saying it allows Chinese people to see Americans' data

Source: Buzzfeed

Australian users' data sent to China.

Source: The Guardian

The truth is that any company deploying tracking technology for whatever reason on its website or in its apps is sending data to the company that supplies the tracking code. If an organization works in any area where privacy is important to its users, it must know that its real estate is handing information to a third party.

Although companies may only be interested in their customers’ traversals around their own websites, the data collected by the tracking technology company may not be limited to that. Similarly, those signing up for the “free” tiers of user tracking may get only limited metrics (until they start to pay, of course). But the tracking company – you can be sure – will absorb all the information it can.

That the third party receiving the data may be in Beijing or San Francisco is irrelevant. Companies need to know that using off-the-shelf tracking technology supplied by a third party spills their information to that third party. Whether that’s a good trade for internal marketing insights is highly debatable.

The post Meta Pixel “scandal” surprises too many tax-payers appeared first on TechHQ.

]]>
The monthly receipt-chase no more: AI and ML for finance teams https://techhq.com/2023/06/the-monthly-receipt-chase-no-more-ai-and-ml-for-finance-teams/ Fri, 30 Jun 2023 08:22:12 +0000 https://techhq.com/?p=225953

Twenty years ago, the advent of the internet led many to worry about their job security. Yet the rapid digitisation that ensued created millions of careers that didn’t exist before. By 2018, digital jobs accounted for 7.7% of the UK economy. History does tend to repeat itself, and right now, there are scores of headlines... Read more »

The post The monthly receipt-chase no more: AI and ML for finance teams appeared first on TechHQ.

]]>

Twenty years ago, the advent of the internet led many to worry about their job security. Yet the rapid digitisation that ensued created millions of careers that didn’t exist before. By 2018, digital jobs accounted for 7.7% of the UK economy.

History does tend to repeat itself, and right now, there are scores of headlines popping up every day about the latest ‘threat’ born of Silicon Valley: artificial intelligence (AI) and machine learning (ML). Thanks, in part, to a new wave of freely available AI-powered chatbots, businesses and consumers alike are becoming aware of the technology’s extensive capabilities.

While this has opened up fierce discussion about regulation even among the brightest minds in tech, there is also optimism about how AI could revolutionise the enterprise. Rather than fearing it may replace everyone’s jobs – there are many examples of why this won’t happen in both technical and creative roles – it is easy to see how it could make work more enjoyable by automating dull, repetitive tasks.

This is particularly true in growing businesses, where higher-level employees may be bogged down by mundane tasks instead of focusing on strategy and other complex responsibilities.

For instance, no one wants to dig around their desk or car footwells for crumpled-up receipts when submitting expenses. Similarly, most CFOs don’t want to be stomping about workstations, chasing staff for paperwork. Dedicated expense management apps enable employees to take a photo of their receipt as soon as they get it, automatically drawing out the relevant data to start a pain-free submission process.
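As a rough illustration of the receipt-scanning step – not Concur’s actual pipeline, which isn’t public – a toy version can be built in Python with the open-source pytesseract OCR library: read the photo, run OCR, then pull the total out of the recognised text.

import re
from typing import Optional

from PIL import Image
import pytesseract  # open-source OCR wrapper, used here purely for illustration

def extract_receipt_total(photo_path: str) -> Optional[float]:
    """Toy sketch: OCR a receipt photo and pull out the total amount."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    # Look for a line such as "TOTAL £23.99" or "Total: 23.99".
    match = re.search(r"total[:\s]*[£$€]?\s*(\d+\.\d{2})", text, re.IGNORECASE)
    return float(match.group(1)) if match else None

print(extract_receipt_total("receipt.jpg"))  # pre-fills the expense claim

A production system layers on far more – line-item extraction, currency handling, merchant matching – but the underlying flow is the same.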

An example of this can be found with Barnsley Metropolitan Borough Council, which recently automated its expense management processes with Concur Expense, a solution from SAP Concur. The council wanted to ‘eliminate manual processes and create digital workflows [to] provide the highest possible levels of service to residents’. After installing Concur Expense, mobile teams (like social workers) could input expenses on the go through an app. The time saved in expense entry and the reduced number of human errors thanks to AI-powered automation decreased the average reimbursement time from six weeks to three days.

Implementing AI-powered T&E management software has also helped Barnsley Metropolitan Borough Council focus on sustainability. The data allows it to better monitor its ‘grey fleet’ – the personal vehicles being used for business purposes – and therefore its carbon footprint too. The council can now make more informed decisions about its travel policy, and appropriately encourage employees to make use of public transport.

David Robinson, Service Director at the council, said: “We want to adopt modern ways of working to give people the right work-life balance and attract the best talent. Concur Expense is a great example of how we can increase support for digital initiatives among employees by making sure they’re not out of pocket while we process claims.”

Integrating AI into workplace software is nothing new, of course. But travel and expense management has been under-explored, despite being a data-rich area and, therefore, perfect for machine learning. There is also demand; research from Forrester Consulting has shown that 59% of decision-makers say that employee frustration with their expense process had a large or very large negative impact on the entire company.

Advanced algorithms can analyse and categorise expense data by recognising patterns and using natural language processing techniques. They auto-approve low-risk claims, eliminating the need for manual entry and reducing errors. As these algorithms process more expenses, they get better at knowing what is safe to approve without human intervention.
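A heavily simplified sketch of that auto-approval idea – using scikit-learn rather than any vendor’s actual model – is to train a text classifier on historical claim descriptions and auto-approve only when the predicted probability of approval clears a threshold.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real system would learn from thousands
# of historical claims and their human approve/reject decisions.
descriptions = ["taxi to client site", "team lunch", "hotel two nights",
                "gift voucher", "train to head office", "casino chips"]
approved = [1, 1, 1, 0, 1, 0]  # 1 = approved by a human reviewer

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(descriptions, approved)

def auto_approve(description: str, threshold: float = 0.9) -> bool:
    """Approve automatically only when the model is confident enough."""
    p_approve = model.predict_proba([description])[0][1]
    return p_approve >= threshold

p = model.predict_proba(["taxi from airport"])[0][1]
print(f"approve probability: {p:.2f}, auto-approved: {auto_approve('taxi from airport')}")

Anything below the threshold simply drops back into the human review queue, which is how such systems stay safe while the model is still learning.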

Automation reduces the chances of anything slipping through the cracks. Research has shown that 62% of finance leaders find that digital tools help them manage expenses more effectively across their organisations. Studies have also shown that over half of companies implementing an integrated travel and expense solution increase their scalability and flexibility.

Moreover, AI can defend the company against fraud by comparing the expense data against predefined rules and benchmarks and flagging any potential violations. It also helps organisations follow all compliance and regulatory requirements, something that’s often only an afterthought. From a business intelligence perspective, too, technology can highlight cost-saving opportunities, generating reports for decision-makers from collated travel and expense data.
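The rules-and-benchmarks side is easier to picture. A minimal sketch, assuming hypothetical policy limits, might flag over-limit and duplicate claims like this:

from dataclasses import dataclass

@dataclass
class Claim:
    employee: str
    category: str
    amount: float

# Hypothetical benchmarks; real limits come from company policy.
POLICY_LIMITS = {"meals": 50.0, "hotel": 200.0, "taxi": 80.0}

def flag_violations(claims: list[Claim]) -> list[str]:
    """Compare each claim against predefined rules and flag breaches."""
    flags, seen = [], set()
    for c in claims:
        # Unknown categories default to a limit of 0, so they always escalate.
        if c.amount > POLICY_LIMITS.get(c.category, 0.0):
            flags.append(f"{c.employee}: {c.category} over limit ({c.amount})")
        key = (c.employee, c.category, c.amount)
        if key in seen:  # crude duplicate-claim check
            flags.append(f"{c.employee}: possible duplicate {c.category} claim")
        seen.add(key)
    return flags

claims = [Claim("ana", "meals", 35.0), Claim("ana", "meals", 35.0),
          Claim("bob", "hotel", 540.0)]
print(flag_violations(claims))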

The minds at SAP Concur know that implementing AI and ML into finance processes can be intimidating. However, its software is well tried and tested, having been adopted by over 48,000 businesses since the early 1990s.

Expense management has been undergoing optimisation with AI and ML for longer than most other areas of corporate finance. Indeed, automating small, repetitive tasks – like using Concur Expense to submit and approve receipts – is a good starting point.

In the future, we can expect AI to completely transform expense management, resulting in increased company efficiency, savings from better financial management and minimised environmental impact. Its widespread implementation is expected to result in a 7% increase in GDP over the next ten years.

If you want to learn more about automating your expenses with SAP Concur, visit this website.

The post The monthly receipt-chase no more: AI and ML for finance teams appeared first on TechHQ.

]]>
Z-Library finds alternative ways to stay active https://techhq.com/2023/06/zlibrary-z-library-alternative-access-accessibility/ Wed, 28 Jun 2023 08:18:05 +0000 https://techhq.com/?p=225889

• Z-Library alternatives growing, opening users up to scams. • Z-Library deploying its own new access method. • Z-Library embroiled in a complex legal case. Z-Library, the shadow library that allows file sharing of academic journals, is finding alternative ways to stay alive despite significant attacks and losses in recent months. A new access method... Read more »

The post Z-Library finds alternative ways to stay active appeared first on TechHQ.

]]>

• Z-Library alternatives growing, opening users up to scams.
• Z-Library deploying its own new access method.
• Z-Library embroiled in a complex legal case.

Z-Library, the shadow library that allows file sharing of academic journals, is finding alternative ways to stay alive despite significant attacks and losses in recent months.

A new access method has been launched by the shadow library to help improve accessibility. The dedicated desktop application will make it easier to access the site going forward, after several rounds of domain seizures by the US government.

One of the reasons for the development of this software is the criminal case that Z-Library is embroiled in. In November 2022, Z-Library lost access to over 200 domain names after two of its alleged operators were arrested in Argentina. Both defendants, who are Russian, retained US lawyers to fight their cases.

Notably, both Valeriia Ermakova – who has hired the services of Temkin & Associates – and Anton Napolsky – who is being represented by Brown Legal Consulting – have opted for lawyers who are fluent in Russian.

More recently, there have been further domain seizures. Last year, Z-Library initially responded by denying that it had been targeted, but this time around the site’s operators were quick to confirm the action, directing users to alternative login screens through a Telegram message.

“Unfortunately, one of our primary login domains was seized today. Therefore, we recommend using the domain singlelogin.re to log in to your account, as well as to register,” the Z-Library team wrote.

Via TorrentFreak.com

Z-Library alternatives gain traffic

Although the shadow library has pressed ahead despite the recriminations, lost domain names send traffic to knockoffs and alternatives. Z-Library alternatives are now getting millions of visitors a month, but they don’t have the same track record as the original, putting users at risk of scams.

The shadow library also faced a huge bot attack earlier this month, causing technical issues: registrations stopped working and email delivery was interrupted. These are also likely factors in the development of the new desktop software.

The desktop launcher will be available for Mac, Windows and Linux platforms, and will automatically redirect users to the right place without relying on a single domain name. Previously, users accessed Z-Library through a dedicated URL that directed them to a ‘personal’ domain granting access to the library.

The use of subdomains was working, but could easily have been wiped out by yet another round of domain seizures. The team announced that the launcher “will save you the trouble of searching for a working website link, as it will handle everything for you.”

Not only does it simplify access to Z-Library, but it can connect over the Tor network, which helps evade blocking efforts and adds another layer of privacy. Apparently, the software will likely trigger a notice that it’s from an unverified developer; Z-Library says this is standard, but of course users should treat third-party applications with caution.
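Z-Library hasn’t published the launcher’s internals, but the multi-domain fallback pattern it describes is straightforward to sketch. Here is a generic Python illustration – the mirror addresses are invented – that tries each known mirror in turn and falls back to a Tor onion address when every clearnet domain is unreachable.

import urllib.request

# Invented mirror list; a real launcher would fetch a signed, regularly
# updated list rather than hard-coding one.
MIRRORS = ["https://mirror1.example.org", "https://mirror2.example.org"]
ONION_FALLBACK = "http://exampleaddress.onion"  # reached via a local Tor proxy

def find_working_mirror(timeout: float = 5.0) -> str:
    """Return the first mirror that answers, else the onion fallback."""
    for url in MIRRORS:
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return url
        except OSError:
            continue  # seized or unreachable domain: try the next one
    # All clearnet domains failed; a real client would now route traffic
    # through a Tor SOCKS proxy (commonly 127.0.0.1:9050).
    return ONION_FALLBACK

print("Connecting via", find_working_mirror())

Because the list lives in the client rather than at any single domain, seizing one name no longer severs access – which is precisely the property the launcher is after.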

The accessibility argument

It might be unexpected that the service would plough ahead during an active criminal case. In this respect, it’s reminiscent of how The Pirate Bay positioned itself years ago. The Z-Library team sees “free access to literature” as its main driver.

“The goal of Z-Library is to provide free access to literature to as many people in need as possible. Books are the scientific and cultural heritage of all humankind, and we strive to preserve this legacy and use its power for the benefit of our society.”

“We don’t promote piracy. The work of authors and publishers should be paid for and valued,” the Z-Library team explains, adding that it supports copyright legislation and doesn’t aim to change any laws.

However, free access to literature is paramount to many students’ studies, particularly in remote or underfunded areas.

Source: https://twitter.com/ApalaBhowmick/status/1572613014444212225?s=20

According to Bhowmick (the above tweet’s author), the Z-Library shutdown fits a pattern of racism and inequity that hinders promising young people from following their academic passions to futures in wealthier countries with better-supported academic institutions. “There are scant ways of finding access to scholarly literature in India due to fractured print culture networks and limited incomes,” she said. “It’s almost a deliberate strategy to gate-keep academia from those who are racialized, or marginalized in other ways, especially in vulnerable economies in the world.”

Never underestimate the power of free access to literature.

The post Z-Library finds alternative ways to stay active appeared first on TechHQ.

]]>