As AI takes the helm of decision making, signs of perpetuating historic biases emerge

Studies show that AI systems used to make important decisions such as approval of loan and mortgage applications can perpetuate historical bias and discrimination if not carefully constructed and monitored (Seksan Mongkhonkhamsao/Getty Images).

In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.

With 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the chatbots recommended denials for more Black applicants than identical white counterparts. They also recommended Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”

White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. And among applicants with “low” credit scores of 640, the margin was wider — white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.

The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like lending and underwriting of loans and mortgages. These “black box” systems, where the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant fintech professor at Lehigh and one of the authors of the study.

But there’s also large potential for flawed training data, programming errors, and historically biased information to affect the outcomes, sometimes in detrimental, life-changing ways.

“There’s a potential for these systems to know a lot about the people they’re interacting with,” Bowen said. “If there’s a baked-in bias, that could propagate across a bunch of different interactions between customers and a bank.”

How does AI discriminate in finance?

Decision-making AI tools and large language models, like the ones in the Lehigh University experiment, are being used across a variety of industries, like healthcare, education, finance and even in the judicial system.

Most machine learning algorithms follow what are called classification models: you formally define a problem or question, then feed the algorithm a set of inputs such as a loan applicant’s age, income, education and credit history, explained Michael Wellman, a computer science professor at the University of Michigan.

The algorithm spits out a result — approved or not approved. More complex algorithms can assess these factors and deliver more nuanced answers, like a loan approval with a recommended interest rate.
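
As a rough illustration of the kind of classifier Wellman describes, the sketch below trains a loan-approval model on a handful of made-up applicants. The feature set, the tiny training data and the choice of scikit-learn's logistic regression are all assumptions for the example, not details from the Lehigh study.

```python
# Minimal sketch of a loan-approval classification model.
# Features, data and model choice are invented for illustration;
# this is not the setup used in the Lehigh study.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, income in $1,000s, years of education, credit score]
X_train = np.array([
    [34, 55, 16, 700],
    [52, 80, 12, 640],
    [29, 40, 14, 580],
    [45, 95, 18, 720],
])
y_train = np.array([1, 1, 0, 1])  # historical outcomes: 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

applicant = np.array([[38, 60, 16, 640]])
print(model.predict(applicant))        # hard decision: approve or deny
print(model.predict_proba(applicant))  # probability, which could drive a rate offer
```

The important point for the article's argument is that the model can only be as fair as the historical outcomes stored in `y_train`; if those past decisions were biased, the model learns the bias.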

Machine learning advances in recent years have allowed for what’s called deep learning, or construction of big neural networks that can learn from large amounts of data. But if AI’s builders don’t keep objectivity in mind, or rely on data sets that reflect deep-rooted and systemic racism, results will reflect that.

“If it turns out that you are systematically more often making decisions to deny credit to certain groups of people more than you make those wrong decisions about others, that would be a time that there’s a problem with the algorithm,” Wellman said. “And especially when those groups are groups that are historically disadvantaged.”

Bowen was initially inspired to pursue the Lehigh University study after a smaller-scale assignment with his students revealed the racial discrimination by the chatbots.

“We wanted to understand if these models are biased, and if they’re biased in settings where they’re not supposed to be,” Bowen said, since underwriting is a regulated industry that’s not allowed to consider race in decision-making.

For the official study, Bowen and a research team ran thousands of loan application numbers over several months through different commercial large language models, including OpenAI’s GPT 3.5 Turbo and GPT 4, Anthropic’s Claude 3 Sonnet and Opus and Meta’s Llama 3-8B and 3-70B.

In one experiment, they included race information on applications and saw the discrepancies in loan approvals and mortgage rates. In another, they instructed the chatbots to “use no bias in making these decisions.” That experiment saw virtually no discrepancies between loan applicants.

But if race data isn’t collected in modern day lending, and algorithms used by banks are instructed to not consider race, how do people of color end up getting denied more often, or offered worse interest rates? Because much of our modern-day data is influenced by disparate impact, or the influence of systemic racism, Bowen said.

Though a computer wasn’t given the race of an applicant, a borrower’s credit score, which can be influenced by discrimination in the labor and housing markets, will have an impact on their application. So might their zip code, or the credit scores of other members of their household, all of which could have been influenced by the historic racist practice of redlining, or restricting lending to people in poor and nonwhite neighborhoods.

Machine learning algorithms aren’t always reaching their conclusions in the way humans might imagine, Bowen said. The patterns these models learn carry across a variety of scenarios, so a model may even be digesting reports about discrimination, for example learning that Black people have historically had worse credit. The computer might then pick up on signs that a borrower is Black and deny their loan, or offer them a higher interest rate than it would a white counterpart.
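
A small, self-contained simulation can make the proxy effect concrete. Everything below is synthetic and invented for illustration: race is never handed to the model, but a correlated feature (here a zip-code group) lets it reproduce the historical gap anyway.

```python
# Illustrative simulation of proxy bias: the model never sees race,
# but a correlated feature carries the same signal. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

race = rng.integers(0, 2, n)                                # hidden from the model
zip_group = np.where(rng.random(n) < 0.8, race, 1 - race)   # zip code correlates with race
credit = rng.normal(680 - 30 * race, 40, n)                 # historical gap in credit scores

# Historical approvals were biased against group 1
approved = (credit + rng.normal(0, 20, n) - 25 * race > 650).astype(int)

X = np.column_stack([zip_group, credit])                    # race itself is excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[race == g].mean():.2f}")
# The gap persists even though race was never an input feature.
```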

Other opportunities for discrimination 

Decision making technologies have become ubiquitous in hiring practices over the last several years, as application platforms and internal systems use AI to filter through applications, and pre-screen candidates for hiring managers. Last year, New York City began requiring employers to notify candidates about their use of AI decision-making software.

By law, the AI tools should be programmed to have no opinion on protected classes like gender, race or age, but some users allege that they’ve been discriminated against by the algorithms anyway. In 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to examine more closely how new and existing technologies change the way employment decisions are made. Last year, the commission settled its first-ever AI discrimination hiring lawsuit.

The New York federal court case ended in a $365,000 settlement when tutoring company iTutorGroup Inc. was alleged to have used an AI-powered hiring tool that rejected women applicants over 55 and men over 60. Two hundred applicants received the settlement, and iTutor agreed to adopt anti-discrimination policies and conduct training to ensure compliance with equal employment opportunity laws, Bloomberg reported at the time.

Another anti-discrimination lawsuit is pending in California federal court against AI-powered company Workday. Plaintiff Derek Mobley alleges he was passed over for more than 100 jobs at companies that contract with the software firm because he is Black, older than 40 and has mental health issues, Reuters reported this summer. The suit claims Workday trains its software on data about a company’s existing workforce, a practice that can carry past discrimination into future hiring decisions.

U.S. judicial and court systems have also begun incorporating decision-making algorithms in a handful of operations, like risk assessment analysis of defendants, determinations about pretrial release, diversion, sentencing and probation or parole.

Though the technologies have been credited with speeding up some traditionally lengthy court processes — like document review and assistance with small claims court filings — experts caution that they are not ready to serve as the primary or sole evidence in a “consequential outcome.”

“We worry more about its use in cases where AI systems are subject to pervasive and systemic racial and other biases, e.g., predictive policing, facial recognition, and criminal risk/recidivism assessment,” the co-authors of a 2024 paper in Judicature wrote.

Utah passed a law earlier this year to combat exactly that. HB 366, sponsored by state Rep. Karianne Lisonbee, R-Syracuse, addresses the use of an algorithm or a risk assessment tool score in determinations about pretrial release, diversion, sentencing, probation and parole, saying that these technologies may not be used without human intervention and review.

Lisonbee told States Newsroom that by design, the technologies provide a limited amount of information to a judge or decision-making officer.

“We think it’s important that judges and other decision-makers consider all the relevant information about a defendant in order to make the most appropriate decision regarding sentencing, diversion, or the conditions of their release,” Lisonbee said.

She also brought up concerns about bias, saying the state’s lawmakers don’t currently have full confidence in the “objectivity and reliability” of these tools. They also aren’t sure of the tools’ data privacy settings, which is a priority to Utah residents. These issues combined could put citizens’ trust in the criminal justice system at risk, she said.

“When evaluating the use of algorithms and risk assessment tools in criminal justice and other settings, it’s important to include strong data integrity and privacy protections, especially for any personal data that is shared with external parties for research or quality control purposes,” Lisonbee said.

Preventing discriminatory AI

Some legislators, like Lisonbee, have taken note of these issues of bias and the potential for discrimination. Four states currently have laws aiming to prevent “algorithmic discrimination,” where an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. This includes Utah, as well as California (SB 36), Colorado (SB 21-169) and Illinois (HB 0053).

Though it’s not specific to discrimination, Congress introduced a bill in late 2023 to amend the Financial Stability Act of 2010 to include federal guidance for the financial industry on the uses of AI. This bill, the Financial Artificial Intelligence Risk Reduction Act or the “FAIRR Act,” would require the Financial Stability Oversight Council to coordinate with agencies regarding threats to the financial system posed by artificial intelligence, and may regulate how financial institutions can rely on AI.

Lehigh’s Bowen made it clear he felt there was no going back on these technologies, especially as companies and industries realize their cost-saving potential.

“These are going to be used by firms,” he said. “So how can they do this in a fair way?”

Bowen hopes his study can help inform financial and other institutions in deployment of decision-making AI tools. For their experiment, the researchers wrote that it was as simple as using prompt engineering to instruct the chatbots to “make unbiased decisions.” They suggest firms that integrate large language models into their processes do regular audits for bias to refine their tools.
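
A rough sketch of what that combination of prompt instruction and recurring audit might look like follows. The `ask_model` helper is a hypothetical stand-in for whatever LLM client a firm actually uses, and the prompt wording and audit threshold are assumptions, not the study's code.

```python
# Sketch of a debiasing instruction plus a simple approval-rate audit.
# `ask_model` is a hypothetical stand-in for any LLM client call.
SYSTEM_INSTRUCTION = (
    "You are a loan underwriting assistant. Use no bias in making these decisions."
)

def recommend(application_text: str, ask_model) -> str:
    """Return 'approve' or 'deny' for one application."""
    reply = ask_model(
        system=SYSTEM_INSTRUCTION,
        user=f"Application:\n{application_text}\nAnswer 'approve' or 'deny'.",
    )
    return "approve" if "approve" in reply.lower() else "deny"

def audit_gap(decisions_by_group: dict) -> float:
    """Difference in approval rates between two groups of otherwise-identical
    test applications, e.g. {'group_a': [...], 'group_b': [...]}."""
    rates = {g: sum(d == "approve" for d in ds) / len(ds)
             for g, ds in decisions_by_group.items()}
    a, b = rates.values()
    return abs(a - b)

# A firm might rerun paired test applications on a schedule and flag the
# model for human review whenever the gap exceeds a tolerance such as 0.02.
```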

Bowen and other researchers on the topic stress that more human involvement is needed to use these systems fairly. Though AI can deliver a decision on a court sentencing, mortgage loan, job application, healthcare diagnosis or customer service inquiry, that doesn’t mean these systems should operate unchecked.

University of Michigan’s Wellman told States Newsroom he’s looking for government regulation on these tools, and pointed to H.R. 6936, a bill pending in Congress which would require federal agencies to adopt the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology. The framework calls out potential for bias, and is designed to improve trustworthiness for organizations that design, develop, use and evaluate AI tools.

“My hope is that the call for standards … will read through the market, providing tools that companies could use to validate or certify their models at least,” Wellman said. “Which, of course, doesn’t guarantee that they’re perfect in every way or avoid all your potential negatives. But it can … provide basic standard basis for trusting the models.”

Data, pilot projects showing food service robots may not threaten jobs

Fast-casual restaurant Chipotle is experimenting with a work station that has automation assembling salads and bowls underneath the counter while a human worker assembles more complex dishes such as burritos on top. (Photo courtesy of Chipotle)

Though food service workers and economists have long worried about the impact technology would have on the restaurant labor force, pilot programs in several fast-casual restaurants over the last few years have shown it may not have the negative impact they feared, a labor economist says.

Technology plays several roles in food service, but the industry has seen the adoption of touch screens, AI-powered ordering and food prep machines over the last few years. And even more recently, it’s become more likely that a robot is playing a part in your food preparation or delivery.

They may take shape as your bartender, your server or your food delivery driver, but many are like the “collaborative” robots just rolled out in some Chipotle restaurants in California.

The company is testing the Autocado, which splits and prepares avocados to be turned into guacamole by a kitchen crew member, and the Augmented Makeline, which builds bowls and salads autonomously underneath the food line while employees construct burritos, tacos and quesadillas on top. Chipotle said 65% of its mobile orders are for salads or bowls, and the Augmented Makeline’s aim is improving efficiency and digital order accuracy.

Fast casual restaurant Chipotle is using the “Autocado,” a machine that automatically produces the company’s guacamole. (Photo courtesy of Chipotle)

The company said it invested in robotics company Vebu and worked with them on the design for the Autocado, and it invested in food service platform Hyphen, which custom made the Augmented Makeline for Chipotle.

“Optimizing our use of these systems and incorporating crew and customer feedback are the next steps in the stage-gate process before determining their broader pilot plans,” Curt Garner, Chipotle’s chief customer and technology officer said in a statement.

The company said the introduction of these robots will not eliminate any jobs, as the crew members are supposed to have a “cobotic relationship” with them. The aim is that crew members will be able to spend more time on either food prep tasks or on providing hospitality to customers.

Ben Zipperer, a low-wage labor market economist at the Economic Policy Institute, said the early fears around automation and robots threatening jobs in the foodservice industry are not being realized. Automation has shown to make workers more productive and effective, he said.

Robots have also been shown to make businesses more efficient and profitable, Zipperer said, which creates an “offsetting demand factor.” That increased demand and profitability can actually help keep the cost of food for customers more affordable, he added.

When one action is freed up by a robot, the restaurant has more freedom to place workers on other high-demand tasks.

“Either those workers are still going to help produce guacamole, because people want to buy more of it,” Zipperer said of the Chipotle announcement, “or there’s other things that that business is trying to produce but can’t allocate the labor towards, even though they have demand for it.”

Zipperer pointed toward automated food purchasing with the use of touchscreen kiosks, which has been widely adopted in fast food service. In these cases, workers get shifted away from cash registers and toward more back-of-house jobs like food prep or janitorial work.

McDonald’s shows an example of this. The fast food restaurant was one of the earliest adopters of touchscreen kiosks, with thousands of stores using the technology to collect orders by 2015, and screens becoming nearly ubiquitous by 2020.

Last week, the company said the kiosks actually produce extra work for staff, as customers tend to purchase more food than they would at a cash register. The machines have built-in upselling features that cashiers don’t always have time to push with customers, and the introduction of mobile ordering and delivery has created new tasks for front-of-house staff to handle.

Many fast food CEOs have warned that raising minimum wages across the U.S. would mean jobs lost to autonomous machines and kiosks. And while some franchise owners may take that route, it’s not a trend across the whole country. Employment at quick-service and fast casual restaurants was up about 150,000 jobs, or 3%, from pre-pandemic levels in August.

As technology takes more of a role in food service production, businesses that want to succeed will find the balance of cost-saving efficiencies and valued work by their employees, Zipperer said.

“As long as there is demand for what that business is producing, that will allow workers to not feel a lot of the negative effects of technology,” he said.

Pollsters are turning to artificial intelligence this election season

As response rates drop, pollsters are increasingly turning to artificial intelligence to determine what voters are thinking ahead of Election Day, not only asking the questions but sometimes to help answer them. (Stephen Maturen/Getty Images)

Days after President Joe Biden announced he would not be seeking re-election, and endorsed Vice President Kamala Harris, polling organization Siena College Research Institute sought to learn how “persuadable” voters were feeling about Harris.

In their survey, a 37-year-old Republican explained that they generally favored Trump for his ability to “get [things] done one way or another.”

“Who do you think cares about people like you? How do they compare in terms of caring about people like you?” the pollster asked.

“That’s where I think Harris wins, I lost a lot of faith in Trump when he didn’t even contact the family of the supporter who died at his rally,” the 37-year-old said.

Pollsters pressed this participant and others across the political spectrum to further explain their stances, and examine the nuance behind choosing a candidate. The researchers saw in real time how voters may sway depending on the issue, and asked follow-up questions about their belief systems.

But the “persuadable” voters weren’t talking to a human pollster. They were conversing with an AI chatbot called Engage.

The speed at which election cycles move, coupled with a steep drop in the number of people participating in regular phone or door-to-door polls, has pushed pollsters to turn to artificial intelligence for insights, not only to ask the questions but sometimes to help answer them.

Why do we poll? 

The history of polling voters in presidential races goes back 200 years, to the 1824 race which ultimately landed John Quincy Adams in the White House. White men began polling each other at events leading up to the election, and newspapers began reporting the results, though they didn’t frame the results as predictive of the outcome of the election.

In modern times, polling for public opinion has become a business. Research centers, academic institutions and news conglomerates themselves have been conducting polls during election season for decades. Though their accuracy has limitations, the practice is one of the only ways to gauge how Americans may be thinking before they vote.

Polling plays a different role for different groups, said Rachel Cobb, an assistant professor of political science and legal studies at Suffolk University. For campaign workers, polling groups of voters helps provide insight into the issues people care about the most right now, and informs how candidates talk about those issues. It’s why questions at a presidential debate usually aren’t a surprise to candidates — moderators tend to ask questions about the highest-polling topics that week.

For news outlets, polls help give context to current events and give anchors numbers to illustrate a story. Constant polling also helps keep a 24-hour news cycle going.

And for regular Americans, poll results help them gauge where the race is, and either activate or calm their nerves, depending on if their candidate is polling favorably.

But Cobb said she, like many of her political science colleagues, has observed a drop in responses to more traditional styles of polling. It’s much harder and more expensive for pollsters to do their job, because people aren’t answering their phones or their front doors.

“The time invested in getting the appropriate kind of balance of people that you need in order to determine accuracy has gotten greater, and so they’ve had to come up with more creative ways to get them,” Cobb said. “At the same time, our technological capacity has increased.”

How is AI assisting in polling?

The speed of information has increased exponentially with social media and 24-hour news cycles, and polls have had to keep up, too. Though they bring value in showing insights for a certain group of people, their validity is fleeting because of that speed, Cobb said. Results are truly only representative of that moment in time, because one breaking news story could quickly change public opinion.

That means pollsters have to work quickly, or train artificial intelligence to keep up.

Leib Litman, co-CEO and chief research officer of CloudResearch, which created the chatbot tool Engage, said AI has allowed them to collect answers so much faster than before.

“We’re able to interview thousands of people within a matter of a couple hours, and then all of that data that we get, all those conversations, we’re also able to analyze it, and derive the insights very, very quickly,” he said.

Engage was developed about a year ago and can be used in any industry where you need to conduct market research via interviews. But it’s become especially useful in this election cycle as campaigns attempt to learn how Americans are feeling at any given moment. The goal isn’t to replace human responses with AI, but rather to use AI to reach more people, Litman said.

But some polling companies are skipping interviews altogether and instead relying on something called “sentiment analysis AI” to analyze publicly available data and opinions. Think tank Heartland Forward recently worked with AI-powered polling group Aaru to determine the public perception of artificial intelligence.

The prediction AI company uses geographical and demographic data of an area and scrapes publicly available information, like tweets or voting records, to simulate respondents of a poll. The algorithm uses all this information to make assertions about how a certain demographic group may vote or how they may answer questions about political issues.

This type of poll was a first for Heartland Forward, and its executive vice president Angie Cooper said they paired the AI-conducted poll with in-person gatherings where they conducted more traditional polls.

“When we commissioned the poll, we didn’t know what the results were going to yield,” she said. “What we heard in person closely mirrored the poll results.”

Sentiment analysis

The Aaru poll is an example of sentiment analysis AI, which uses machine learning and large language models to analyze the meaning and tone behind text. It includes training an algorithm to not just understand literally what’s in a body of text, but also to seek out hidden messaging or context, like humans do in conversation.
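
As a concrete (and deliberately simple) example of the technique, the snippet below scores two invented social media posts with NLTK's VADER analyzer. It is only meant to show what a polarity score looks like; it is not the tooling Aaru or any polling firm is known to use.

```python
# Minimal sentiment-analysis example using NLTK's VADER lexicon.
# Illustrative only; the posts are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the scoring lexicon
sia = SentimentIntensityAnalyzer()

posts = [
    "Finally a candidate who actually cares about people like me.",
    "I lost a lot of faith in him after that rally.",
]
for post in posts:
    scores = sia.polarity_scores(post)
    # 'compound' runs from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {post}")
```

A lexicon score like this has no knowledge of events outside the text itself, which is exactly the missing-context problem Ahmed describes later in the piece.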

The general public started interacting with this type of AI in about 2010, said Zohaib Ahmed, founder of Resemble AI, which specializes in voice generation AI. Sentiment analysis AI is the foundation behind search engines that can read a request and make recommendations, and behind devices like Alexa that can fulfill a spoken command.

Between 2010 and 2020, though, the amount of information collected on the internet increased exponentially. There’s now far more data for AI models to process and learn from, and technologists have taught them to process contextual, “between-the-lines” information.

The concept behind sentiment analysis is already well understood by pollsters, says Bruce Schneier, a security technologist and lecturer at Harvard University’s Kennedy School. In June, Schneier and other researchers published a look into how AI was playing a role in political polling. 

Most people think polling is just asking people questions and recording their answers, Schneier said, but there’s a lot of “math” between the questions people answer and the poll results.

“All of the work in polling is turning the answers that humans give into usable data,” Schneier said.

You have to account for a few things: people lie to pollsters, certain groups may have been left out of a poll, and response rates are overall low. You’re also applying polling statistics to the answers to come up with consumable data. All of this is work that humans have had to do themselves before technology and computing helped speed up the process.
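
Weighting is one familiar piece of that math: if a group answers the poll less often than its share of the population, its responses are counted more heavily. The toy example below uses invented shares and answers just to show the mechanics.

```python
# Toy example of survey weighting: respondents are reweighted so each
# age group counts in proportion to its share of the population.
# All shares and answer rates are invented for illustration.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}  # who actually answered

weights = {g: population_share[g] / sample_share[g] for g in population_share}

yes_rate = {"18-34": 0.60, "35-64": 0.50, "65+": 0.30}  # raw "yes" share per group

raw = sum(sample_share[g] * yes_rate[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in sample_share)

print(f"raw estimate:      {raw:.2f}")       # skewed toward over-represented groups
print(f"weighted estimate: {weighted:.2f}")  # closer to what the population would say
```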

In the Harvard research, Schneier and the other authors say they believe AI will get better at anticipating human responses, and knowing when it needs human intervention for more accurate context. Currently, they said, humans are our primary respondents to polls, and computers fill in the gaps. In the future, though, we’ll likely see AI filling out surveys and humans filling in the gaps.

“I think AI should be another tool in the pollster’s mathematical toolbox, which has been getting more complex for the past several decades,” Schneier said.

Pros and cons of AI-assisted polling 

AI polling methods bring pollsters more access and opportunity to gauge public reaction. Those who have begun using it in their methodology said that they’ve struggled to get responses from humans organically, or they don’t have the time and resources to conduct in-person or telephone polling.

Being interviewed by an anonymous chatbot may also provide more transparent answers for controversial political topics. Litman said personal, private issues such as health care or abortion access are where their chatbot “really shines.” Women, in particular, have reported that they feel more comfortable sharing their true feelings about these topics when talking to a chatbot, he said.

But, like all methodology around polling, it’s possible to build flaws into AI-assisted polling.

The Harvard researchers ran their own experiment asking ChatGPT 3.5 questions about the political climate, and found shortcomings when they asked about U.S. intervention in the Ukraine war. Because the AI model only had access to data through 2021, the answers missed all of the current context about Russia’s invasion beginning in 2022.

Sentiment analysis AI may also struggle with text that’s ambiguous, and it can’t be counted on to interpret developing information, Ahmed said. For example, the X timeline following one of the two assassination attempts on Trump probably included favorable or supportive messages from politicians across the aisle. An AI algorithm might read the situation and conclude that all of those people are very pro-Trump.

“But it doesn’t necessarily mean they’re navigating towards Donald Trump,” Ahmed said. “It just means, you know, there’s sympathy towards an event that’s happened, right? But that event is completely missed by the AI. It has no context of that event occurring, per se.”

Just like phone-call polling, AI-assisted polling can also potentially leave whole groups of people out of surveys, Cobb said. Those who aren’t comfortable using a chatbot, or aren’t very active online will be excluded from public opinion polls if pollsters move most of their methods online.

“It’s very nuanced,” Ahmed said of AI polling. “I think it can give you a pretty decent, high-level look at what’s happening, and I guarantee that it’s being used by election teams to understand their position in the race, but we have to remember we exist in bubbles, and it can be misleading.”

Both the political and technology experts agreed that as with most other facets of our lives, AI has found its way into polling and we likely won’t look back. Technologists should aim to further train AI models to understand human sentiment, they say, and pollsters should continue to pair it with human responses for a fuller scope of public opinion.

“Science of polling is huge and complicated,” Schneier said. “And adding AI to the mix is another tiny step down a pathway we’ve been walking for a long time using, you know, fancy math combined with human data.”

Budget restrictions, staff issues and AI are threats to states’ cybersecurity

A new survey of state chief information security officers finds them better prepared to protect their networks from cyberattacks than four years earlier, but still worried about limited staff and resources (Bill Hinton/Getty Images).

Many state chief information security officers say they don’t have the budget, resources, staff or expertise to feel fully confident in their ability to guard their government networks against cyberattacks, according to a new Deloitte & Touche survey of officials in all 50 states and D.C.

“The attack surface is expanding as state leaders’ reliance on information becomes increasingly central to the operation of government itself,” said Srini Subramanian, principal of Deloitte & Touche LLP and the company’s global government and public services consulting leader. “And CISOs have an increasingly challenging mission to make the technology infrastructure resilient against ever-increasing cyber threats.”

The biennial cybersecurity report, released today, outlined where new threats are coming from, and what vulnerabilities these teams have.

Governments are relying more on servers to store information and on the Internet of Things, networks of connected sensor devices, to transmit it. Infrastructure for systems like transit and power is also heavily reliant on technology, and all of these connected online systems create more opportunities for attack.

The emergence of AI is also creating new ways for bad actors to exploit vulnerabilities, as it makes phishing scams and audio and visual deep fakes easier.

Deloitte found encouraging data showing that the role of state chief information security officer has been prioritized on every state’s government tech team, and that statutes and legislation introduced in some states give CISOs more authority.

In recent years, CISOs have taken on the vast majority of security management and operations, strategy, governance, risk management and incident response for their state, the report said.

But despite the growing weight on these roles, some of the CISOs surveyed said they do not have the resources needed to feel confident in their team’s ability to handle old and new cybersecurity threats.

Nearly 40% said they don’t have enough funds for projects that comply with regulatory or legal requirements, and nearly half said they don’t know what percent of their state’s IT budget is for cybersecurity.

Talent was another issue, with about half of CISOs saying they lacked cybersecurity staffing, and 31% saying there was an “inadequate availability” of professionals to complete these jobs. The survey does show that CISOs reported better staff competencies in 2024 compared to 2020, though.

Retaining CISOs themselves has been an increasing issue since the pandemic, with burnout a driving factor, the report found. Since the 2022 survey, Deloitte noted, nearly half of all states have had turnover in their chief security officers, and the median tenure is now 23 months, down from 30 months in the last survey.

When it came to generative AI, CISOs seemed to see both the opportunities and risks. Respondents listed generative AI as one of the newest threats to cybersecurity, with 71% saying they believe it poses a “high” threat, and 41% saying they don’t have confidence in their team’s ability to handle such threats.

While they believe AI is a threat, many teams also reported using the technology to improve their security operations. Twenty-one states are already using some form of AI, and 22 states will likely begin using it in the next year. As with state legislation around AI, it’s being looked at on a case-by-case basis.

One CISO said in the report their team is “in discovery phase with an executive order to study the impact of gen AI on security in our state” while another said they have “established a committee that is reviewing use cases, policies, procedures, and best practices for gen AI.”

CISOs face these budgetary and talent restrictions while they aim to take on new threats and secure aging technology systems that leave them vulnerable.

The report laid out some tactics tech departments could use to navigate these challenges, including leaning on government partners, working creatively to boost budgets, diversifying their talent pipelines, continuing the AI policy conversations and promoting the CISO’s role in the digital transformation of government operations.

Governments often struggle with massive new IT projects

Government requirements and culture can make upgrading aging computer systems difficult, experts say (Getty Images).

Idaho’s state government was facing a problem.

In 2018, its 86 state agencies were operating with a mix of outdated, mismatched business systems that ran internal processes like payroll and human resources. Some of the programs dated back to the 1980s, and many were written in programming languages they don’t teach in engineering schools anymore.

The state made a clear choice, one many other state and city governments have made in recent years: it overhauled its entire IT suite with a single cloud-based software system.

But since the $121 million project, called Luma, rolled out in July 2023, things have not gone as planned.

Luma has created procedural and data errors and caused “disruptions in day-to-day processes and [is] impacting overall productivity,” according to an audit provided to legislators in June.

Five months into its launch last year, the Luma project was still receiving criticism from employees, organizations that work with the state’s government agencies and from top state legislators.

Speaker of the Idaho House of Representatives Mike Moyle said in a November 2023 Legislative Council meeting that the state might want to come up with an exit plan for the platform — “No offense, this thing is a joke and it’s not working,” he told legislators.

Idaho’s Luma project is just one of many government IT overhauls that hasn’t gone as smoothly as city and state officials may have aimed for.

As few as 13% of large government IT projects succeed, a field guide by the U.S. General Services Administration’s 18F team said. The group of designers, software engineers, strategists and product managers work within the GSA to help government agencies buy and build tech products.

State projects, the organization’s report says, can face the most challenges because state departments often don’t have sufficient knowledge about modern software development, and their procurement procedures can be outdated for properly vetting huge software solutions.

“Every year, the federal government matches billions of dollars in funding to state and local governments to maintain and modernize IT systems used to implement federal programs such as Medicaid, child welfare benefits, housing, and unemployment insurance,” 18F’s State Software Budgeting Handbook said. “Efforts to modernize those legacy systems fail at an alarmingly high rate and at great cost to the federal budget.”

Why are governments overhauling long-standing IT systems?

Most of the time, as in the case of Idaho, a state is seeking to overhaul a series of aging, inflexible and ineffective systems with one more modernized approach.

Each year, governments need to budget and allocate resources to maintain existing systems and to get them to work with other business operation systems. In 2019, 80% of the $90 billion federal IT spending budget went toward maintenance of legacy software.

Giant projects, like Washington state’s proposed $465 million replacement program of its legacy systems, may likely be replacing the millions spent every year to keep up old systems.

Aging software systems aren’t just awkward or inefficient to use, but they can also pose cybersecurity risks. Departments that use systems built with older programming languages that are going out of style will struggle to find employees who can maintain them, experts say. Departments might also struggle to get newer business systems to integrate with older ones, which causes the potential for hiccups in operation.

A closer look at Luma 

Idaho’s State Controller’s Office found itself in that position six years ago when it sought to overhaul all its business operation systems. Scott Smith, the chief deputy controller, and project manager of Luma, said they were trying to maintain systems that they were losing technical support for.

Each agency had built its own homegrown system, or had procured its own, up until that point. There was a desire to modernize operations statewide and audit the return on investment for taxpayers. The project got the name Luma, an attempt for the state to “enlighten, or shine a light on” its existing systems and update them, Smith said.

After a procurement process, the state chose enterprise resource planning software company Infor, and replaced a collection of separate systems that ran payroll, budgets, financial management and human resources with one cloud-based solution. Many of these legacy systems dated back to 1987 and 1988, and were becoming vulnerable to security threats, Smith said.

Reports by the Idaho Capital Sun found that since its rollout last summer, the new system didn’t correctly distribute $100 million in interest payments to state agencies, double paid more than $32 million in Idaho Department of Health and Welfare payments, and created payroll issues or delays for state employees. A nonprofit that works with the state said it wasn’t paid for months, and only received payments when it sought attention from state legislators and local media. And on launch day in July 2023, only about 50% of employees had completed basic training on the system.

In February, Moyle and a bipartisan group of eight legislators asked an independent, nonpartisan state watchdog agency called the Office of Performance Evaluations to look into Luma’s software. And in June, a Legislative Services audit found the system lacked a range of information technology controls for data validation and security.

The performance evaluation report isn’t due until October, but Ryan Langrill, interim director of the OPE, said in August that they were told to make the Luma study its priority.

“Our goal is to identify what went well and what didn’t and to offer recommendations for future large scale IT projects,” Langrill said.

Smith told States Newsroom that with any large-scale IT project, there’s always going to be difficulties during the first year of implementation. Idaho is the first to do a rollout of this kind, where all business processes went live at once in a multi-cloud environment, he said.

They developed requirements for the system for several years before its rollout last year and spent time in system integration testing with experts from Infor.

“Once you put it into the real world, right? There’s still a lot for you to understand,” Smith said. “And while the system itself can provide you the functionality, there’s still a lot of inherent business processes that need to be adapted to the new system.”

Each agency had to evaluate their own internal processes, Smith said. Large-scale departments like military, transportation and health and human services are going to operate differently than smaller ones like libraries and the historical society. Trying to provide a singular system to support each facet of government is going to come with its challenges, he said.

Human error has also likely played a role in the rollout, Smith said. As employees have to learn the new system and make changes to years-long processes, they’ll have to take time to change, adjust, refine and improve.

Smith said he hopes the Office of Performance Evaluations looks at the Luma project with a “holistic” approach, going back to source selections and analyzing what could have been done better with everything from implementation to the development of requirements for the technology.

“We’ll obviously look at those results and see where we can make improvements, but it can also be used, I hope, as a source document for others…” Smith said. “Every state’s going through a system modernization effort, that they can use to help improve their potential for success in their projects.”

Other challenging rollouts 

A similar situation is brewing in Maine with the rollout of its child welfare system, called Katahdin — named after a mountain in Baxter State Park.

The state sought to overhaul its child welfare database used by the Office of Child and Family Services back in 2019 when its older system began losing functionality, the Maine Morning Star reported. It aimed to “modernize and improve” technical support for staff that work with families, and the department received eight proposals from software companies in 2021, but only three met eligibility criteria.

The state ultimately chose Deloitte, and spent nearly $30 million on the project, which went live in January 2022. But employees say their workflow hasn’t been as effective since.

Caseworkers have described it as cumbersome, saying they need to use dozens of steps and duplicative actions just to complete a single task, and that files saved in the system later go missing. It’s additional stress on a department that faces staff vacancies and long waitlists to connect families with resources, the Maine Morning Star reported in March.

In her annual report in 2023, Christine Alberi, the state’s child welfare ombudsman, wrote “Katahdin is negatively affecting the ability of child welfare staff to effectively do their work, and therefore keep children safe.”

Katahdin, too, received recommendations from a bipartisan oversight committee to improve the system earlier this year. Recommendations included factors beyond just the software, like improvements to the court system, recruiting more staff and addressing burnout.

States Newsroom sought to determine if any of the recommendations had been implemented, and to confirm that the department was still using Katahdin, but the department did not return a request for comment.

A fall 2023 report shows that California has also struggled with the maintenance of its statewide financial system that performs budgeting, procurement, cash management and accounting functions. The program, called FI$Cal, has cost about $1 billion since it began in 2005, and last fall State Auditor Grant Parks said that despite two decades of effort, “many state entities have historically struggled to use the system to submit timely data for the [Annual Comprehensive Financial Report].”

The state, which is famously home to tech capital Silicon Valley, has its own department of technology, which oversees the strategic vision and planning of the state’s tech strategy. But the department landed on the Auditor’s “high risk” list in 2023, with Parks saying the department has not made sufficient progress on its tech projects.

Government v. corporate tech rollouts

When a government rolls out a new software system, two things are happening, says Mark Wheeler, former chief information officer of the City of Philadelphia. First, they’re replacing a system that’s been around for decades, and second, they’re introducing workers to technology that they may see in their private lives, but aren’t used to operating in a government setting.

Sometimes, he said, governments spend a lot of time planning for the day a system goes live, but don’t think about the long learning curve afterward. They spend years defining functionality and phases of a product, but they don’t designate the real resources needed for “change management,” or the capacity for teams to engage with technologists and become a part of the transition to using the new technology.

Wheeler suggests that departments train new hires in advance of a rollout so certain people can fully focus on the technology transition. Learning these new technologies and building new internal processes can become “a full time job” of its own, Wheeler said. The people who are touch-points for their department with the new systems will also need to form relationships with the software companies they’ve chosen to ease the transition.

Huge software rollouts can follow either an “agile” or “waterfall” approach — agile focuses on continuous releases that incorporate customer feedback, while waterfall has a clearly defined series of phases, and one phase must reach completion before others start.

“We get this message over and over again that government needs to operate like a business, and therefore all of our major technology transformations need to operate in this agile format,” Wheeler said. “Well, if you don’t properly train people and introduce them to agile and create the capacity for them to engage in those two week sprints, that whole agile process starts to fall apart.”

Another way these tech transformations differ between the private and public sectors is that private corporations often have project managers who oversee the many facets of a project and “own” it from start to finish. Between constant iteration on its improvement, attention to its long-term health, and the care and growth of the project, Wheeler says, corporations tend to invest in more people to see transitions through.

Wheeler acknowledged that it can be frustrating for residents to see huge budgets dedicated to government projects that take time to come to fruition and to work smoothly. But his main advice to state or city governments that are on the precipice of a huge change is to invest in the change management teams. When a government is spending potentially hundreds of millions of dollars on a new solution, the tiny budget line of some additional personnel can make or break the success of a project.

And finally, Wheeler says, governments and residents should keep in mind the differing expectations and priorities between the private and public sectors when comparing them.

Tech transformations at large companies are mostly about meeting a bottom line and return on investment, while governments are responsible for the health and safety and stability of their societies. They also require the feedback and inclusion of many, many stakeholders and due process procedures, Wheeler said, and they have to be transparent about their decision-making.

Governments also just aren’t known to be super great with change, he said.

“As much as the public says they want government to move quickly, when you propose a very big change, suddenly everyone wants to question it and make sure that they have their say in the process,” Wheeler said. “And that includes technology pieces so that will slow it all down.”

Americans’ perception of AI is generally negative, though they see ‘beneficial applications’

A new poll of Americans across nine states by Heartland Forward finds that Americans are generally wary of artificial intelligence but are more positive about the potential in specific economic sectors (Getty Images).

A vast majority of Americans feel negatively about artificial intelligence and how it will impact their futures, though they also report they don’t fully understand how and why the technology is currently being used.

The sentiments came from a survey conducted this summer by think tank Heartland Forward, which used Aaru, an AI-powered polling group that uses news and social media to generate respondents.

The poll sought to learn about the perceptions of AI among Americans across different racial, gender and age groups in Alabama, Illinois, Indiana, Louisiana, Michigan, North Dakota, Ohio, Oklahoma and Tennessee. Heartland Forward also held in-person dinners in Fargo, North Dakota, and Nashville, Tennessee, to collect sentiments.

While more than 75% of respondents reported that they feel skeptical, scared or overall negatively about AI, they reported more positive feelings when they learned about specific uses in industries like healthcare, agriculture and manufacturing.

Many of the negative feelings were about AI and work, with 83% of respondents saying they think it could negatively impact their job opportunities or career paths. Those respondents said they feel anxious about AI in their industries, and nearly 53% said they feel they should get AI training in the workplace. Louisiana respondents showed the highest level of concern about job opportunities (91%), while Alabama showed the highest level of workplace anxiety (90%).

Respondents also had huge doubts about AI’s ethical capabilities and data protection, with 87% saying they don’t think AI can make unbiased ethical decisions, and 89% saying it doesn’t have the ability to safeguard privacy.

But when the pollsters told respondents about specific AI uses in healthcare, agriculture, manufacturing, education, transportation, finance and entertainment, they got positive responses. The majority of respondents believe AI can have “beneficial applications” across numerous industries.

Nearly 79% of respondents felt AI could have a moderate or positive impact on healthcare, 77% said so about agriculture, manufacturing and education, 80% said so about transportation, 73% said so about finance and 70% said so about entertainment.

Very strong positive feelings about AI were less common, but some states stood out, seeing applications in dominant local industries. North Dakota showed more interest than others when it came to agriculture, with 35% of people seeing “very high” potential, compared to 19% in Oklahoma and 18% in Louisiana.

“It really shows us that one, education is important, and that two, we need to bring the right people around the table to talk about it,” said Angie Cooper, executive vice president of Heartland Forward.

The negative and positive sentiments recorded by the poll found very little variation between the gender, age and racial groups. The negative sentiments of AI’s impact on society were held across the entire political spectrum, too, Cooper said.

Another uniting statistic was that at least 93% of respondents believe that it’s at least “moderately important” for governments to regulate AI.

Cooper said that during the organization’s dinners in Fargo and Nashville — which brought investors, entrepreneurs, business owners and policymakers together — it was clear that people had some understanding of how AI was being used in their sector, but they weren’t aware of policies and regulations introduced at the state level.

Though there’s no federal AI legislation, 11 more states have enacted laws so far this year about how to use, regulate or place checks and balances on AI, bringing the total to 28 states with AI legislation.

“The data shows, and the conversations that we’ve had in Fargo and Nashville really are around that there’s still a lack of transparency,” Cooper said. “And so they believe policy can help play a role there.”

Where exactly are all the AI jobs?

The welcome screen for the OpenAI “ChatGPT” app is displayed on a laptop screen in a photo illustration (Leon Neal/Getty Images).

The desire for artificial intelligence skills in new hires has exploded over the last five years, and continues to be a priority for hiring managers across nearly every industry, data from Stanford University’s annual AI Index Report found.

In 2023, 1.6% of all United States-based jobs required AI skills, a slight dip from the 2% posted in 2022. The decrease comes after many years of growing interest in artificial intelligence, and is likely attributable to hiring slowdowns, freezes or layoffs at major tech companies like Amazon, Deloitte and Capital One in 2023, the report said.

The numbers are still greatly up from just a few years ago, and in 2023, thousands of jobs across every industry required AI skills.

What do those AI jobs look like? And where are they based, exactly?

Generative AI skills, or the ability to build algorithms that produce text, images or other data when prompted, were sought after most, with nearly 60% of AI-related jobs requiring those skills. Large language modeling, or building technology that can generate and translate text, was second in demand, with 18% of AI jobs citing the need for those skills.

Those skills were followed by ChatGPT knowledge, prompt engineering (crafting the instructions given to AI models) and two other specific machine learning skills.

The industries that require these skills run the gamut — the information industry ranked first with 4.63% of jobs while professional, scientific and technical services came in second with 3.33%. The financial and insurance industries followed with 2.94%, and manufacturing came in fourth with 2.48%.

Public administration, education, management and utilities jobs all sought AI skills in 1% to 2% of their open roles, while agriculture, mining, wholesale trade, real estate, transportation, warehousing, retail trade and waste management sought AI skills in 0.4% to 0.85% of their jobs.

(Stanford University graphic)

Though AI jobs are concentrated in some areas of the country, nearly every U.S. state had thousands of AI-specific jobs in 2023, the report found.

California — home to Silicon Valley — had 15.3%, or 70,630 of the country’s AI-related jobs posted in 2023. It was followed by Texas at 7.9%, or 36,413 jobs. Virginia was third, with 5.3%, or 24,417 of AI jobs.

Based on population, Washington state had the highest percentage of people in AI jobs, with California in second, and New York in third.

Montana, Wyoming and West Virginia were the only states with fewer than 1,000 open roles requiring AI, but because of their small populations, AI jobs still made up 0.75%, 0.95% and 0.46% of each state’s open roles last year, respectively.

Though the number of jobs dipped from 2022 to 2023, the adoption of AI technologies across business operations has not. In 2017, 20% of businesses reported that they had begun using AI for at least one function of their work. In 2022, 50% of businesses said they had, and that number reached 55% in 2023.

For those that have incorporated AI tools into their businesses, the technology is making workers more productive, the report found. Studies cited in the report have shown that AI tools let workers complete tasks more quickly and improve the quality of their work. The research also suggested that AI could be capable of upskilling workers.

The report acknowledges that with all the technological advances that the AI industry has seen in the last five years, there are still many unknowns. The U.S. is still awaiting federal AI legislation, while states make their own regulations and laws.

The Stanford report sketches two possible futures for the technology. In one, the technology continues to develop and increase productivity, though it may be put to both “good and bad uses.” In the other, without proper research and development, the adoption of AI technologies could be constrained, researchers said.

“They are stepping in to encourage the upside, such as funding university R&D and incentivizing private investment,” the report said of government bodies. “Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.”

AI will play a role in election misinformation. Experts are trying to fight back https://missouriindependent.com/2024/08/16/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/ https://missouriindependent.com/2024/08/16/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/#respond Fri, 16 Aug 2024 13:00:43 +0000 https://missouriindependent.com/?p=21519

The rapid advancement of artificial intelligence technology has made it easier to create believable but totally fake videos and images and spread misinformation about elections, experts say (Tero Vesalainen/Getty Images).

In June, amid a bitterly contested Republican gubernatorial primary race, a short video began circulating on social media showing Utah Gov. Spencer Cox purportedly admitting to fraudulent collection of ballot signatures.

The governor, however, never said any such thing and courts have upheld his election victory.

The false video was part of a growing wave of election-related content created by artificial intelligence. At least some of that content, experts say, is false, misleading or simply designed to provoke viewers.

AI-created likenesses, often called “deepfakes,” have increasingly become a point of concern for those battling misinformation during election seasons. Creating deepfakes used to take a team of skilled technologists with time and money, but recent advances and accessibility in AI technology have meant that nearly anyone can create convincing fake content.

“Now we can supercharge the speed and the frequency and the persuasiveness of existing misinformation and disinformation narratives,” Tim Harper, senior policy analyst for democracy and elections at the Center for Democracy and Technology, said.

AI has advanced remarkably just since the last presidential election in 2020, Harper said, noting that OpenAI’s release of ChatGPT in November 2022 brought accessible AI to the masses.

About half of the world’s population lives in countries that are holding elections this year. And the question isn’t really if AI will play a role in misinformation, Harper said, but rather how much of a role it will play.

How can AI be used to spread misinformation?

Though it is often intentional, misinformation produced by artificial intelligence can sometimes be accidental, caused by flaws or blind spots baked into a tool’s algorithm. AI chatbots pull information from the databases they have access to, so if that information is wrong or outdated, the chatbot can easily produce wrong answers.

OpenAI said in May that it would be working to provide more transparency about its AI tools during this election year, and the company endorsed the bipartisan Protect Elections from Deceptive AI Act, which is pending in Congress.

“We want to make sure that our AI systems are built, deployed, and used safely,” the company said in the May announcement. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

Poorly regulated AI systems can also spread misinformation. Several secretaries of state recently called on Elon Musk after his AI search assistant Grok, built for the social media platform X, falsely told users Vice President Kamala Harris was ineligible to appear on the presidential ballot in nine states because the ballot deadline had passed. The false information stayed on the platform, and was seen by millions, for more than a week before it was corrected.

“As tens of millions of voters in the U.S. seek basic information about voting in this major election year, X has the responsibility to ensure all voters using your platform have access to guidance that reflects true and accurate information about their constitutional right to vote,” reads the letter signed by the secretaries of state of Washington, Michigan, Pennsylvania, Minnesota and New Mexico.

Generative AI impersonations also create a new avenue for spreading misinformation. In addition to the fake video of Cox in Utah, a deepfake video of Florida Gov. Ron DeSantis falsely showed him dropping out of the 2024 presidential race.

Some misinformation campaigns happen on huge scales like these, but many others are more localized, targeted campaigns. For instance, bad actors may imitate the online presence of a neighborhood political organizer, or send AI-generated text messages to listservs in certain cities. Language minority communities have been harder to reach in the past, Harper said, but generative AI has made it easier to translate messages or target specific groups.

While most adults are aware that AI will play a role in the election, some hyperlocal, personalized campaigns may fly under the radar, Harper said.

For example, someone could use data about local polling places and public phone numbers to create messages specific to you. They may send a text the night before election day saying that your polling location has changed from one spot to another, and because they have your original polling place correct, it doesn’t seem like a red flag.

“If that message comes to you on WhatsApp or on your phone, it could be much more persuasive than if that message was in a political ad on a social media platform,” Harper said. “People are less familiar with the idea of getting targeted disinformation directly sent to them.”

Verifying digital identities 

The deepfake video of Cox helped spur a partnership between a public university and a new tech platform with the goal of combating deepfakes in Utah elections.

From July 2024 through Inauguration Day in January 2025, students and researchers at the Gary R. Herbert Institute for Public Policy and the Center for National Security Studies at Utah Valley University will work with SureMark Digital. Together, they’ll verify digital identities of politicians to study the impact AI-generated content has on elections.

Through the pilot program, candidates seeking one of Utah’s four congressional seats and the open Senate seat will be able to authenticate their digital identities at no cost through SureMark’s platform, with the goal of increasing trust in Utah’s elections.

Brandon Amacher, director of the Emerging Tech Policy Lab at UVU, said he sees AI playing a similar role in this election as the emergence of social media did in the 2008 election — influential but not yet overwhelming.

“I think what we’re seeing right now is the beginning of a trend which could get significantly more impactful in future elections,” Amacher said.

In the first month of the pilot, Amacher said, the group has already seen how effective these simulated video messages can be, especially in short-form media like TikTok and Instagram Reels. A shorter video is easier to fake, and if someone is scrolling these platforms for an hour, a short clip of misinformation likely won’t get very much scrutiny, but it could still influence your opinion about a topic or a person.

SureMark Chairman Scott Stornetta explained that the verification platform, which rolled out in the last month, allows a user to acquire a credential. Once that’s approved, the platform runs an authorization process over all of the user’s published content, using cryptographic techniques that bind the person’s identity to the content that features them. A browser extension then tells viewers whether a piece of content was published by the credentialed person or by an unauthorized actor.

The platform was created with public figures in mind, especially politicians and journalists, who are vulnerable to having their images replicated. Anyone can download the SureMark browser extension to check content across different media platforms, not just the public figures who get accredited. Stornetta likened the technology to an X-ray.

“If someone sees a video or an image or listens to a podcast on a regular browser, they won’t know the difference between a real and a fake,” he said. “But if someone that has this X-ray vision sees the same documents in their browser, they can click on a button and basically find out whether it’s a green check or red X.”
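
SureMark hasn’t published its exact protocol, but the core idea of cryptographically binding a person’s identity to content can be illustrated with an ordinary digital signature. The sketch below is a minimal, hypothetical example, not SureMark’s actual API; it assumes the third-party Python “cryptography” package. A publisher signs a hash of a media file with a private key, and anyone holding the matching public key, such as a browser extension, can check whether the file is unchanged and really came from that publisher.

# Minimal sketch of binding an identity to content with a digital signature.
# Hypothetical illustration only; assumes the third-party "cryptography"
# package (pip install cryptography) and is not SureMark's actual protocol.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Publisher side: sign a SHA-256 hash of the media file."""
    digest = hashlib.sha256(content).digest()
    return private_key.sign(digest)


def verify_content(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Viewer side (e.g., a browser extension): check the signature."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True   # "green check": content matches the credentialed publisher
    except InvalidSignature:
        return False  # "red X": altered content or unknown publisher


# Example: a campaign signs a video file; a viewer later verifies it.
campaign_key = Ed25519PrivateKey.generate()
video = b"...raw bytes of a campaign video..."
signature = sign_content(campaign_key, video)

print(verify_content(campaign_key.public_key(), video, signature))                # True
print(verify_content(campaign_key.public_key(), video + b"tampered", signature))  # False

In practice, a platform like SureMark would also need a trusted way to issue and vouch for the public keys themselves, which is the role the credentialing step described above appears to play.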

The pilot program is currently working to credential the state’s politicians, so it will be a few months before they start to glean results, but Justin Jones, the executive director of the Herbert Institute, said that every campaign they’ve connected with has been enthusiastic to try the technology.

“All of them have said we’re concerned about this and we want to know more,” Jones said.

What’s the motivation behind misinformation?

Lots of different groups with varying motivations can be behind misinformation campaigns, Michael Kaiser, CEO of Defending Digital Campaigns, told States Newsroom.

Some misinformation is directed at specific candidates, as with the deepfake videos of Govs. Cox and DeSantis. Campaigns built around geopolitical events, like wars, are also commonly used to sway public opinion.

Russia’s influence on the 2016 and 2020 elections is well-documented, and those efforts will likely continue in 2024 with the goal of undermining U.S. support of Ukraine, a Microsoft study recently reported.

There’s sometimes a monetary motivation to misinformation, Amacher said, as provocative, viral content can turn into payouts on platforms that pay users for views.

Kaiser, whose work focuses on providing cybersecurity tools to campaigns, said that while interference in elections is sometimes the goal, more commonly, these people are trying to cause a general sense of chaos and apathy toward the elections process.

“They’re trying to divide us at another level,” he said. “For some bad actors, the misinformation and disinformation is not about how you vote. It’s just that we’re divided.”

It’s why much of the AI-generated content is inflammatory or plays on your emotions, Kaiser said.

“They’re trying to make you apathetic, trying to make you angry, so maybe you’re like, ‘I can’t believe this, I’m going to share it with my friends,’” he said. “So you become the platform for misinformation and disinformation.”

Strategies for stopping the spread of misinformation 

Understanding that emotional response, and the eagerness to share or engage with the content, is key to slowing the spread of misinformation. If you find yourself in that moment, there are a few things you can do, the experts said.

First, try to find out if an image or sound bite you’re viewing has been reported elsewhere. You can use reverse image search on Google to see if that image is found on reputable sites, or if it’s only being shared by social media accounts that appear to be bots. Websites that fact check manufactured or altered images may point you to where the information originated, Kaiser said.

If you’re receiving messages about election day or voting, double check the information online through your state’s voting resources, he added.

Adding two-factor authentication on social media profiles and email accounts can help ward off phishing attacks and hacking, which can be used to spread misinformation, Harper said.

If you get a phone call you suspect may be AI-generated, or that uses someone’s voice likeness, it’s good to confirm the caller’s identity by asking about the last time you spoke.

Harper also said there are a few giveaways to look out for in AI-generated images, like an extra finger or a distorted ear or hairline; AI has a hard time rendering some of those finer details, he said.

Another visual clue, Amacher said, is that deepfake videos often feature a blank background, because busy surroundings are harder to simulate.

And finally, the closer we are to the election, the likelier you are to see misinformation, Kaiser said. Bad actors use proximity to the election to their advantage — the closer you are to election day, the less time your misinformation has to be debunked.

Technologists themselves can shoulder some of the responsibility for curbing misinformation through the way they build AI, Harper said. He recently published a summary of recommendations for AI developers with suggestions for best practices.

The recommendations included refraining from releasing text-to-speech tools that allow users to replicate the voices of real people, refraining from the generation of realistic images and videos of political figures and prohibiting the use of generative AI tools for political ads.

Harper also suggested that AI companies disclose how often a chatbot’s training data is updated with election-related information, develop machine-readable watermarks for content and promote authoritative sources of election information.

Some tech companies already voluntarily follow many of these transparency best practices, but much of the country is governed by a “patchwork” of laws that haven’t kept pace with the technology itself.

A bill prohibiting the use of deceptive AI-generated audio or visual media of a federal candidate was introduced in Congress last year, but it has not been enacted. Laws focusing on AI in elections have been passed at the state level in the last two years, though, and they primarily either ban AI-created messaging and images or require specific disclaimers about the use of AI in campaign materials.

But for now, these young tech companies that want to do their part in stopping or slowing the spread of misinformation can seek some direction from the CDT report or pilot programs like UVU’s.

“We wanted to take a stab at creating a kind of a comprehensive election integrity program for these companies,” Harper said, “understanding that, unlike the kind of legacy social media companies, they’re very new and quite young and haven’t had the time or the regulatory scrutiny required to create strong election integrity policies in a more systematic way.”

https://missouriindependent.com/2024/08/16/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/feed/ 0
IT glitch causing delays in flights, business operations globally https://missouriindependent.com/briefs/it-glitch-causing-delays-in-flights-business-operations-globally/ Fri, 19 Jul 2024 15:58:00 +0000 https://missouriindependent.com/?post_type=briefs&p=21163

Long queues of passengers form at the check-in counters at Ninoy Aquino International Airport, amid a global IT disruption caused by a Microsoft outage and a Crowdstrike IT problem, on July 19, 2024 in Manila, Philippines. A significant Microsoft outage impacted users globally, leading to widespread disruptions, including cancelled flights and disruptions at retailers globally. Airlines like American Airlines and Southwest Airlines reported difficulties with their systems, which rely on Microsoft services for operations. The outage affected check-in processes and other essential functions, causing frustration among travellers and lines to back up at many affected airports worldwide (Ezra Acayan/Getty Images).

Air travel, banking, media and hospital systems are just some of the industries affected by a bug in a software update that scrambled business operations around the globe Friday morning.

Many of those who use Microsoft Windows are likely experiencing a “blue screen of death” or an error page. The issue is due to a single bug in a software update from cybersecurity company CrowdStrike, which provides antivirus software for Microsoft users.

The company pushed out an update to the software overnight, and at 1:30 a.m. EST, CrowdStrike said its “Falcon Sensor” software was causing Microsoft Windows to crash and display a blue screen, Reuters reported.

CrowdStrike President and CEO George Kurtz released a statement early Friday morning on X, saying that the incident was not a security concern or a cyberattack. He added that the issue has been identified and that the company has been deploying a fix.

“We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website,” Kurtz said.

The bug was causing major delays and cancellations at airports across the globe. Flight tracking site FlightAware noted nearly 24,000 delays and 2,300 cancellations globally by 9:30 a.m. Friday. While some airlines have been able to resume operation of their digital systems, others are finding analog workarounds in the meantime.

The U.S. Department of Transportation said it was monitoring the situation and suggested that those experiencing travel delays and cancellations use its FlightRights.gov website to help navigate the disruptions.

Some states’ 911 and non-emergency lines were experiencing issues, including those in Alaska, Virginia and New Jersey.

New Jersey Gov. Phil Murphy released a statement early Friday morning saying that the state had activated its State Emergency Operations Center in response to the disruptions and provided guidance to other agencies about how to work through the situation.

“We are also engaging county and local governments, 911 call centers, and utilities to assess the impact and offer our assistance,” he said.

Microsoft released a troubleshooting guide on X early Friday morning.

By 10 a.m. Friday, some global companies were seeing relief from the outages. Downdetector, which tracks real-time outages, showed companies like Visa, Zoom, UPS and Southwest Airlines returning to more normal operations than they had in the early morning hours.

Speaking to the hosts of the Today show Friday morning, Kurtz said he was “deeply sorry for the impact we’ve caused to customers, to travelers, to anyone affected.” He said some customers have been able to reboot and are making progress getting back online, and that trend will likely continue throughout the day.

(This is a developing story)
