Turkey’s presidential race will go to a runoff, but control over Turkey’s internet clearly belongs to Erdogan. This past weekend, as Turks prepared to cast their ballots, the government pressed Twitter to censor several hundred accounts that weren’t to its liking. Those with the biggest followings on the list belong to vocal critics of Erdogan and the ruling Justice and Development Party and to journalists like Ore Trust, who reports and opines on Turkish politics from exile. Travis Brown is maintaining a list of the restricted accounts on GitHub.
Just like it did in India in March, Twitter complied with these requests and suspended a raft of accounts within Turkey, without missing a beat. Elon deflected critics by arguing that Twitter would have been shut down in Turkey if the company hadn’t complied. I guess he didn’t have time to think about alternatives. Yaman Akdeniz, a veteran tech and law expert from Turkey who I spoke with for this newsletter a few weeks back, tweeted that “companies like Twitter should resist the removal orders, legally challenge them and fight back strategically against any pressure from the Turkish authorities.” Indeed, prior to Musk, Twitter was not afraid to challenge these kinds of demands. But these are different times. I shudder to think what it portends for future elections everywhere.
Former Human Rights Watch head Kenneth Roth summed it up well: “Elon Musk just gave away the store,” he tweeted. “By making clear that he prioritizes Twitter’s presence in a country over the platform’s free-speech principles, he has invited endless censorship demands.” Indeed, if other states see Twitter honoring these kinds of requests, what will stop them from pursuing the same tactics?
People are back online in Pakistan, but the country remains on edge following last week’s arrest of former Prime Minister Imran Khan, which triggered nationwide protests and street violence. In what they said was an effort to restore public order, authorities imposed a wave of network and social media shutdowns. But the chaos continued, and the shutdowns left people unable to communicate or follow the news. Pakistani digital rights expert Hija Kamran told Coda this week that “there is no evidence we can point to anywhere in the world that shows that shutdowns help to restore security.” She’s right. Researcher Jan Rydzak has even shown evidence that shutdowns tend to correlate with, and can even exacerbate, outbursts of violence and social unrest. They’re also really bad for the economy. Total cost estimates for this recent wave of shutdowns vary, but they are on the order of millions of dollars per day.
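For a rough sense of where those per-day figures come from, here is a back-of-envelope sketch, loosely in the spirit of the GDP-share approach that shutdown cost studies tend to use. Every number in it is an assumption I plugged in for illustration, not an official statistic.

```python
# Back-of-envelope estimate of a nationwide internet shutdown's daily cost,
# loosely following the GDP-share method used in shutdown-cost studies.
# Every input below is an illustrative assumption, not an official figure.

GDP_USD = 375e9        # assumed annual GDP, in the rough range of Pakistan's
DIGITAL_SHARE = 0.01   # assumed share of GDP tied to internet-enabled activity
SHUTDOWN_SCOPE = 0.8   # assumed fraction of that activity actually cut off

daily_digital_output = GDP_USD * DIGITAL_SHARE / 365
daily_cost = daily_digital_output * SHUTDOWN_SCOPE

print(f"Estimated cost: ${daily_cost / 1e6:.1f} million per day")
# With these inputs, roughly $8 million per day -- the same order of
# magnitude as the public estimates, which differ mainly in their inputs.
```

Even with generous rounding, the result lands squarely in the millions-per-day range, which is why the varying public estimates all agree on the order of magnitude.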
Want asylum in the U.S.? There’s an app for that. Unless you’ve already tried and failed to get asylum in another country, U.S. Customs and Border Protection’s mobile app, CBP One, is now the only way to sign up for an appointment and expect to have your case heard and actually considered. The Biden administration cemented these guidelines after last week’s expiration of Title 42, the Trump-era rule that strictly limited asylum applications as a response to the pandemic. But, of course, people from all over the world continue to flee dire circumstances that endanger their lives and seek asylum in the U.S. The idea that your safety might literally depend on a mobile app is unnerving, and Amnesty International says it violates international human rights law. Even worse, dozens of people who have tried to use the app say it routinely malfunctions. Stay tuned for a big piece we have coming up on this next month from Erica Hellerstein.
CHATGPT BILLIONAIRE DAZZLES AND DINES WITH US LAWMAKERS
So far, 2023 has been a big year for regulating — or thinking about regulating — AI. Last week in the EU, legislators finally nailed down key elements of the bloc’s AI Act. And China’s Cyberspace Administration released a draft regulation last month for managing generative AI, no doubt expedited by global excitement around ChatGPT. Chinese industry is already very much in the AI game, but under China’s political system, companies know better than to speed into oblivion without minding the rules of the road.
And what of the U.S.? It’s the dominant player in much of the global tech industry. But one big reason that it dominates is that, by and large, we don’t regulate.
Yes, the Biden administration has put out a “blueprint” for an AI bill of rights, and we’ve heard months of discussion about how policymakers could, maybe, sort of, think about regulating AI. But past experience with Silicon Valley companies suggests the free-for-all shall continue. And so did a hearing this week before a U.S. Senate Judiciary subcommittee.
The hearing featured testimony from OpenAI CEO Sam Altman of ChatGPT fame, alongside IBM executive Christina Montgomery and NYU computer science professor Gary Marcus. Lawmakers focused on Altman and asked serious questions that the 38-year-old billionaire and Stanford dropout answered with what seemed like pleasure. It probably helped that he’d dined with several of them the night before and evidently dazzled them with some product demos. Representative Anna Eshoo, who co-chairs the Congressional AI Caucus and has backed serious privacy protection bills in recent years, told CNBC that it was “wonderful to have a thoughtful conversation” with Altman. Yikes.
It was a stark contrast to other recent tech hearings, where CEOs have been pummeled by legislators furious about companies exploiting people’s data, profiting off disinformation and promoting hate speech that leads to real-world violence. The lawmakers seem not to realize that the issues that rightly angered them when they last grilled Meta’s Zuckerberg and Google’s Pichai, along with a host of other problems specific to generative AI, are very much on the table here.
Altman said over and over that he thinks regulation is necessary (Mark Zuckerberg has often said the same) and even suggested some policy moves, like establishing a special agency that would oversee and license companies building large language models. Although his smooth talk may have given the impression that he came up with these ideas himself, experts who don’t stand to profit from the technology have been pushing for much more nuanced versions of these proposals for years.
Perhaps it is more valuable to consider what Altman didn’t say: he made no mention of the fact that companies like his depend on the ability to endlessly scrape data from the web in order to train and “smarten” their technologies. Where does all that data come from? You! We are all putting information onto the internet all the time, and in the U.S., there are no laws protecting that data from being used or abused, whether by private companies, political parties or anyone else.
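To make that concrete, here is a toy sketch of the kind of scraper that training pipelines are built on. The URL and the scale are placeholders of my own; real pipelines, like those built on Common Crawl, do this across billions of pages. But the basic mechanics really are this simple: fetch a public page, strip the markup, keep the human-written text.

```python
# Toy sketch of the kind of scraper that feeds language-model training
# corpora. The URL below is a placeholder; real pipelines (e.g., ones
# built on Common Crawl) do this across billions of pages.
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Fetch a public web page and return only its visible, human-written text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    for tag in soup(["script", "style"]):  # drop markup that isn't prose
        tag.decompose()
    return " ".join(soup.get_text().split())

if __name__ == "__main__":
    pages = ["https://example.com"]  # placeholder; imagine billions of URLs
    corpus = [scrape_page_text(url) for url in pages]
    print(f"Collected {sum(len(doc) for doc in corpus):,} characters of training text")
```

Nothing in U.S. law stops that text, including yours, from ending up in a training corpus.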
I talked about this with my old colleague Nathalie Marechal, who now co-leads the Center for Democracy & Technology’s Privacy and Data Project. “Trying to regulate AI without a federal comprehensive privacy and data protection law seems like a fool’s errand,” Marechal told me. “We need a data privacy law. From there, we can build on that by regulating specific ways of developing AI tools, specific applications. But without rules on how you can collect data, how you can use it, how you can transfer it, anything else to me seems like you’re skipping a step.”
She also described Altman’s moves in D.C. as a “charm offensive” and suggested that by promoting regulation at this stage, companies like OpenAI are better positioned to push some of the blame to Washington when something bad happens involving their products.
Will the U.S. ever meaningfully regulate tech? I really don’t know. But we definitely will get to see what happens when you let the people making the most money off the industry set the agenda.
WHAT WE’RE READING
- The harms coming from AI are already clear and present, especially for people using social services or living in public housing. The Washington Post has a new investigation on the use of video surveillance and facial recognition tech in public housing developments across the U.S. Don’t miss it.
- In a commentary piece for Jurist, Sudanese social media researcher Mohamed Suliman writes that big tech companies in the U.S. have emboldened Sudan’s RSF militia to “spread propaganda and steer public opinion in a bid to normalize their actions and conceal their crimes.” Suliman has been making this argument for years — I’m glad it’s finally getting some attention.