America Isn’t Far Off From China’s ‘Social Credit Score’ (article)
March 27, 2018 at 1:49 pm · #51828 · c_howdy (Participant)
By Anthony Davenport • 02/19/18 6:30am
It’s hardly news that Big Brother is watching you if you happen to live in China. After all, China is a communist country where the government exercises wide control over everything from the economy to the media. But it’s still scary to read about the specific ways Chinese citizens are being watched—and controlled—in Mara Hvistendahl’s terrific Wired story.
In China, mobile payments have surged to $9 trillion a year. A few secretive organizations are using the data from those transactions—along with information pulled from government, legal and social sources—to create a “social credit score.”
In China, your social credit score determines everything from how much interest you pay on a loan to how easily you can rent a car or get a visa to travel overseas. Your score gets hurt for many of the same reasons you would have a bad “traditional” credit score in the United States—such as paying bills late or defaulting on a loan—but your score will also get damaged by consorting with the wrong friends on social media or being associated in real life with people who have been convicted of financial crimes. It isn’t hard to see that being critical of the government in China would make it hard to participate in day-to-day life—at least financially. A bad score could keep you from getting a job, getting access to health care or finding an acceptable place to live.
That couldn’t possibly happen in the land of the free, could it?
Actually, it’s already happening.
We don’t have a few all-powerful monoliths like the federal government (or Alibaba) recording everything we do, but most Americans don’t realize how much of their everyday life is being tracked and analyzed. Right now, data collection companies’ main goal is selling you things. But it isn’t hard to envision how a few changes in the market could produce something like our Chinese friends’ Big Brother experience.
Think about the digital trail you leave in an average week. Your top-of-the-line smartphone has a GPS function that lets it be tracked anywhere you go. You might buy something with Amazon Prime and have it delivered to your home, browse a website promoted to you based on what you’ve liked on Facebook, use your bank’s online bill pay function to pay your cable bill, and drive your car across a bridge where the toll is automatically processed by E-ZPass.
Data aggregators can build a very sophisticated profile of how you spend, then sell that profile to advertisers looking for people just like you. The credit bureaus keep track of how you pay your bills and manage your available credit, and sell that history to companies that assign you a score, which lenders use to judge how reliably you’ll pay off a new line of credit.
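Neither the bureaus nor the data aggregators publish their formulas, but the basic mechanics are easy to picture. The sketch below (Python, purely illustrative; every feature name, weight and range is invented for the example, not taken from any real scoring model) shows how a handful of tracked behaviors could be folded into one number:

```python
# Purely illustrative: a toy score built from tracked behavior.
# Feature names and weights are invented; real bureau (and "social credit")
# formulas are proprietary and far more complex.

FEATURE_WEIGHTS = {
    "on_time_payment_rate": 40.0,   # share of bills paid on time (0-1)
    "credit_utilization": -25.0,    # share of available credit in use (0-1)
    "defaults_on_record": -60.0,    # number of defaults
    "years_of_history": 2.0,        # length of credit history in years
}

def toy_score(profile: dict, base: float = 500.0) -> float:
    """Fold tracked behaviors into a single number, credit-score style."""
    score = base
    for feature, weight in FEATURE_WEIGHTS.items():
        score += weight * profile.get(feature, 0.0)
    return max(300.0, min(850.0, score))  # clamp to a familiar-looking range

consumer = {
    "on_time_payment_rate": 0.95,
    "credit_utilization": 0.40,
    "defaults_on_record": 0,
    "years_of_history": 8,
}
print(round(toy_score(consumer)))  # a single default would cut this by 60 points
```

The point of the sketch is not the particular weights but the pattern: every tracked behavior becomes an input, and whoever sets the weights decides what counts as good behavior.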
We’re also not far from being completely conditioned to accept “social” grading by our peers. That Uber app that makes it so easy to get home after a party? You’re rating the driver on his or her performance, and he or she is doing the same to you. Social platforms like Twitter, Instagram and Facebook have made it easier than ever to praise—and criticize—friends and acquaintances for everything from their fashion taste to their political views and how they raise their children. You can even do it anonymously.
Amazon has been working very hard to integrate itself completely into your everyday life. It’ll sell you anything you want—including a device that sits on your counter and waits for you to ask it something. It just bought Whole Foods to get into the grocery market. Is it hard to imagine Amazon entering the drug business? Or owning a cable company? Or a bank? Is it a stretch to think Amazon will give preferential treatment to Prime customers who use its brick-and-mortar stores or surf the web via an internet service provider they own?
The latest round of tax reforms is giving companies like Amazon, Apple, Google and Facebook even more cash to become even larger conglomerates. They’re all going to possess and consolidate more of your data. And it’s unlikely you’ll know what information they’re collecting—or how they’re going to use it.
Sure, you can opt out, but companies that use “big data” are betting that you will trade privacy for convenience—and a good deal on that sweater you didn’t know you wanted.
The choice is up to you, so make it an informed one. It might seem tedious, but read the fine print when you sign up for any service—whether it’s Amazon Prime, Google or Facebook. What are you giving access to and what can you opt out of? Be aware of the strings attached to free services. Credit Karma and Mint might offer some useful tools to estimate your credit score or keep track of your finances, but just because you aren’t paying for them doesn’t mean they’re free. You’re exchanging your personal data for that convenience. You’re giving them you.
Your financial life is the most tangible way you can be impacted by this kind of data collection, so it pays to be as knowledgeable as you can. When you know how your credit score is being computed, you can make better day-to-day decisions before a crisis hits. You can learn more about how to do that in my new book, Your Score. It’s a guide designed to put you on equal footing with your lenders.
Anthony Davenport is the founder and CEO of Regal Credit Management.
May 7, 2018 at 4:04 pm · #52481 · c_howdy (Participant)
How algorithms (secretly) run the world
February 11, 2017
https://phys.org/news/2017-02-algorithms-secretly-world.html
When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.
These complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a “no fly” list.
Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”
But while such automated tools can inject a measure of objectivity into otherwise subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or “accountability.”
Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.
“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.
O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:
– Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores from an algorithmic formula that evaluated performance.
– A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
– Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that depending on how data is fed into the system, this can lead to the discovery of more minor crimes and a “feedback loop” that stigmatizes poor communities (a toy simulation of this loop follows the list).
– Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
– In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.
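The “feedback loop” in the predictive-policing example is easy to reproduce with a toy simulation. In the sketch below (Python; all numbers are invented and purely illustrative), two neighborhoods have exactly the same underlying incident rate, but patrols are allocated in proportion to past recorded incidents, so the neighborhood that happens to start with more records keeps attracting more patrols and therefore more records:

```python
import random

# Toy illustration of the predictive-policing feedback loop described above.
# Both neighborhoods have the SAME true incident rate; only the initial
# recorded counts differ. All numbers are invented.
random.seed(0)

true_rate = 0.10                 # identical chance of recording an incident per patrol visit
recorded = {"A": 5, "B": 1}      # neighborhood A starts with more recorded incidents
patrols_per_round = 20

for _ in range(10):
    snapshot = dict(recorded)
    total = sum(snapshot.values())
    for hood, past in snapshot.items():
        # Patrols go where past records point...
        patrols = round(patrols_per_round * past / total)
        # ...and more patrols mean more opportunities to record incidents.
        recorded[hood] += sum(random.random() < true_rate for _ in range(patrols))

print(recorded)  # the initial gap widens even though the true rates are equal
```

Swap “recorded incidents” for any proxy measure and the same dynamic appears wherever a model’s outputs feed its own future inputs.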
Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”
The report noted that data systems can ideally help weed out human bias but warned against algorithms “systematically disadvantaging certain groups.”
Zeynep Tufekci, a University of North Carolina professor who studies technology and society, said automated decisions are often based on data collected about people, sometimes without their knowledge.
“These computational systems can infer all sorts of things about you from your digital crumbs,” Tufekci said in a recent TED lecture.
“They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy.”
Such insights may be useful in certain contexts—such as helping medical professionals diagnose postpartum depression—but unfair in others, she said.
Part of the problem, she said, stems from asking computers to answer questions that have no single right answer.
“They are subjective, open-ended and value-laden questions, asking who should the company hire, which update from which friend should you be shown, which convict is more likely to reoffend.”
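What such inference can look like in practice is worth spelling out. The sketch below (Python with NumPy and scikit-learn; the data is synthetic and the “trait” is a generic stand-in label, not any platform’s actual model) trains a classifier on behavioral traces that merely correlate with an attribute the users never disclosed, and recovers it with high accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration of inferring an undisclosed attribute from "digital
# crumbs". The features and their correlations are invented for the example.
rng = np.random.default_rng(0)
n_users = 1000

trait = rng.integers(0, 2, size=n_users)        # hidden attribute, never disclosed
features = np.column_stack([
    rng.normal(0.3 + 0.2 * trait, 0.1),         # e.g. share of late-night activity
    rng.poisson(5 + 10 * trait),                # e.g. likes on one topic
    rng.poisson(3 + 6 * trait),                 # e.g. pages followed on another
])

model = LogisticRegression(max_iter=1000).fit(features[:800], trait[:800])
accuracy = model.score(features[800:], trait[800:])
print(f"undisclosed trait inferred for held-out users with {accuracy:.0%} accuracy")
```

The classifier never sees the attribute for the held-out users; it only sees behavior that statistically co-varies with it, which is exactly the “predictive power” Tufekci is warning about.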
Frank Pasquale, a University of Maryland law professor and author of “The Black Box Society: The Secret Algorithms That Control Money and Information,” shares the same concerns.
He suggests one way to remedy unfair effects may be to enforce existing laws on consumer protection or deceptive practices.
Pasquale points to the European Union’s data protection law, which takes effect next year and creates a “right to explanation” when consumers are affected by an algorithmic decision, as a model that could be expanded.
This would “either force transparency or it will stop algorithms from being used in certain contexts,” he said.
Alethea Lange, a policy analyst at the Center for Democracy and Technology, said the EU plan “sounds good” but “is really burdensome” and risked proving unworkable in practice.
She believes education and discussion may be more important than enforcement in developing fairer algorithms.
Lange said her organization worked with Facebook, for example, to modify a much-criticized formula that allowed advertisers to use “ethnic affinity” in their targeting.
Others meanwhile caution that algorithms should not be made a scapegoat for societal ills.
“People get angry and they are looking for something to blame,” said Daniel Castro, vice president at the Information Technology and Innovation Foundation.
“We are concerned about bias, accountability and ethical decisions but those exist whether you are using algorithms or not.”
© 2017 AFP