
Oxford study says AI can discriminate based on how users scroll or whether they type in lowercase – 02/07/2022 22:40



London – Discrimination by systems using artificial intelligence (AI) is already well documented by technologists, with examples ranging from “racist” policing algorithms to streaming platforms that surface far more male artists than female ones.

But a new study from the University of Oxford reveals the extent of algorithmic bias based on simple user actions, such as the type of browser they use or the way they fill out forms.


Professor Sandra Wachter of the Oxford Internet Institute has identified features that contribute to AI discrimination. Therefore, she argues, existing laws need to be changed urgently to protect users who are victims of such technological biases.

Artificial intelligence automatically generates group discrimination

Wachter explains that artificial intelligence creates new digital groups in society – algorithmic groups – whose members are at risk of discrimination.


By grouping users together in unintuitive and unconventional ways, AI makes decisions that directly affect people’s lives, yet no laws protect these online algorithmic groups from discriminatory consequences.

The expert points out that these new forms of discrimination do not fit the traditional categories the law treats as prejudice.

Currently, laws around the world prohibit discrimination based on characteristics such as race, gender, religion and sexual orientation.


But in the virtual world, AIs replace these categories with groups defined by companies’ goals: dog owners, video game players, Safari users, or people who scroll quickly across computer and smartphone screens.

These algorithmic groups are then used by artificial intelligence to make decisions about car insurance, debt negotiation or loans.

However, within this digital sorting, many users are automatically discriminated against and excluded by the technology, without knowing why they were denied a particular service and without any means of recourse.

Employment, credit: How does AI discriminate against algorithmic groups?

In the study, the author explains that AI-driven discrimination can occur in very common daily activities, without users being aware of it, such as applying for a job.

“Using a certain type of web browser, such as Internet Explorer or Safari, may cause a job seeker’s online application to fail.

In virtual interviews, candidates can be evaluated with facial recognition software that tracks facial expressions, eye movements, breathing or sweating.”

Beyond employment, another possible algorithmic discrimination scenario is applying for a loan.

The article highlights that an applicant is more likely to be rejected if they fill out their digital application entirely in lowercase letters, or if they scroll through the application pages so quickly that it suggests they have not read the requirements carefully.
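To make this mechanism concrete, here is a minimal, purely hypothetical sketch of how incidental signals like browser choice, lowercase typing and scroll speed could feed an automated screening score. Every feature name, weight and threshold below is invented for illustration and does not come from Wachter’s study or any real system:

```python
# Hypothetical sketch only: incidental behavioral signals feeding a toy
# screening score. All feature names, weights and the threshold below are
# invented for illustration; none of them come from Wachter's study.
from dataclasses import dataclass

@dataclass
class ApplicationSession:
    browser: str              # e.g. "safari" or "chrome", from the User-Agent
    all_lowercase: bool       # True if form fields were typed entirely in lowercase
    seconds_per_page: float   # average time the applicant spent on each page

def screening_score(session: ApplicationSession) -> float:
    """Toy score in [0, 1]; higher means 'more likely to be approved'."""
    score = 0.5
    # Proxy signals unrelated to creditworthiness or merit, silently
    # sorting applicants into algorithmic groups.
    if session.browser in ("safari", "internet explorer"):
        score -= 0.15
    if session.all_lowercase:
        score -= 0.20
    if session.seconds_per_page < 5.0:   # scrolled through "too fast"
        score -= 0.15
    return max(0.0, min(1.0, score))

applicant = ApplicationSession(browser="safari", all_lowercase=True,
                               seconds_per_page=3.0)
score = screening_score(applicant)
print(f"score={score:.2f}, approved={score >= 0.5}")   # score=0.00, approved=False
```

The applicant never sees these inputs or their weights, which is precisely the opacity problem the study goes on to describe.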

Wachter explains that decisions made by AI programs can increasingly hinder equal and equitable access to essential goods and services, such as education, healthcare, housing or employment. Therefore, changes in the law are necessary:

“AI systems are now widely used to profile people and make important decisions that affect their lives.

The traditional norms and ideas that define discrimination in law are no longer sufficient in the case of AI, and I call for changes to bring AI within the scope of the law.”


Five features used by AI to discriminate

Professor Sandra Wachter presents her new theory of “artificial immutability”, in which she identifies five features of AI profiling that reinforce discrimination against users.

The concept differs from traditional immutability in that it “does not originate from birth conditions, genetics or similar ‘natural’ sources”.

“In particular, AI profiling systems create artificially unalterable features that in practice cannot be controlled by the individual for various reasons.”

These are: opacity, ambiguity, instability, involuntariness and invisibility, and the lack of a social concept.

Opacity is defined by a lack of knowledge about the underlying AI profiles and decision-making processes. “Total opacity makes it impossible for individuals to exercise personal autonomy and prepare for a successful outcome,” she notes.

Ambiguity, like opacity, describes “a state of profiles or decision-making processes in which the subject receives insufficient information to make informed choices”.

The professor defines instability as a feature of many AI systems: they change over time or produce erratic and unpredictable behavior.


Involuntariness and invisibility appear in AI profiling and decision-making systems that rely on digital and physiological behaviors which are involuntary, invisible, or not obviously relevant. She gives an example:

“Digital behaviors such as search queries, browsing history or click patterns can be used to group users into ad audiences.”
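As a rough illustration of that kind of grouping, the sketch below clusters invented click-pattern features into “audiences” with k-means. The feature vectors and data are made up, and real ad-targeting pipelines are far more elaborate; this only shows how group labels can emerge automatically:

```python
# Hypothetical sketch only: grouping users into "ad audiences" from click
# patterns. The feature vectors are invented, and real targeting pipelines
# are far richer. Requires scikit-learn (pip install scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

# One row per user: [clicks per session, avg seconds between clicks,
# fraction of clicks that landed on ads]. Purely made-up numbers.
clicks = np.array([
    [40, 1.2, 0.30],
    [38, 1.5, 0.28],
    [ 5, 9.0, 0.02],
    [ 7, 8.5, 0.01],
    [22, 4.0, 0.10],
    [25, 3.8, 0.12],
])

# Split the users into three audiences. The labels mean something to the
# algorithm, but correspond to no social category a person would recognize.
audiences = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(clicks)
for user, audience in enumerate(audiences):
    print(f"user {user} -> audience {audience}")
```

The resulting labels 0, 1 and 2 carry no human-readable meaning, which is exactly what the last feature in Wachter’s theory addresses.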

Finally, the last feature identified is the lack of a social concept. This happens, for example, when users are organized by images or click patterns, groupings that carry no relevant social meaning.

“Groups and the logic of grouping them make sense to an algorithm, but not to a human,” the author explains.

“Characteristics that lack a social concept are essentially immutable because individuals have no decision criteria to change, influence, understand, give meaning to or use them.”

The theory proposes that algorithmic groups be considered on the basis of these immutable criteria to formulate anti-discrimination laws:

“In order to rationally streamline AI decision-making, it is necessary to rethink the resulting harms.

It is crucial to empower people to gain autonomy and control over their lives amid automated decision processes that can be confusing, seemingly arbitrary, and ultimately frustrating.

Artificial immutability can serve as a basis for future reform and judicial interpretation of anti-discrimination law to encompass the unprecedented harm caused by AI.”

Read the full article in English here.


Source: Noticias

