Apple faces AirPods lawsuit after an Amber Alert allegedly caused hearing damage

A lawsuit has been filed against Apple alleging that a boy suffered hearing damage when using his AirPods Pro. A couple from Texas filed the suit, NBC News reports. According to the suit, their then 12-year-old son (referred to as “B.G.”) was using AirPods to watch something on his iPhone at a low volume when he received an Amber Alert.

The alert “went off suddenly, and without warning, at a volume that tore apart B.G.’s ear drum, damaged his cochlea and caused significant injuries,” the suit said. The boy’s parents say he suffered from dizziness, vertigo, nausea and tinnitus following the incident in 2020 and that he’ll need to wear a hearing aid for the rest of his life.

They claim AirPods don’t “automatically reduce, control, limit or increment notification or alert volumes to a safe level that causes them to emit” and that Apple doesn’t provide instructions to limit the volume of alerts to prevent hearing damage. The couple argues that Apple hasn’t fixed the problem, and that if the company wasn’t aware of the issue, it should have been.

Other Apple users have complained about AirPod volume spikes on the company’s support website. Engadget has contacted Apple for comment.

Tech industry files emergency application to block controversial Texas social media law

Trade industry groups representing tech giants, such as Google and Facebook, have filed an emergency application with the Supreme Court to block HB 20. That’s the controversial Texas law that bars social media websites from removing or restricting content based on “the viewpoint of the user or another person.” It also allows users to sue large platforms with more than 50 million active monthly users if they believe they were banned for their political views. As The Washington Post reports, it reflects Republicans’ claims that they’re being censored by “Big Tech.”

A federal judge blocked HB 20 from being implemented last year, but the 5th US Circuit Court of Appeals overturned that decision recently. The panel of judges agreed with the state of Texas that social networks are “modern-day public squares,” which means they’re banned from censoring certain viewpoints. One of the judges also said that social networks aren’t websites but “internet providers” instead. The panel allowed the law to take effect while its merits are still being litigated in lower court. 

NetChoice and the Computer and Communications Industry Association (CCIA), the groups representing the tech industry, have maintained that the law is an attack on the First Amendment and have previously questioned its constitutionality. In their emergency application, they said HB 20 is an “unprecedented assault on the editorial discretion of private websites… that would fundamentally transform their business models and services.” 

They explained that under the law, platforms would have no choice but to allow the dissemination of “all sorts of objectionable viewpoints,” such as Russian propaganda justifying the invasion of Ukraine, posts supporting neo-Nazis, the KKK and Holocaust deniers, as well as posts encouraging dangerous behavior, such as disordered eating. “The Fifth Circuit has yet to offer any explanation why the District Court’s thorough opinion was wrong,” they wrote in their application (PDF).

NetChoice and CCIA also argue that allowing the law to be enforced could influence and interfere with the decision of the 11th Circuit Court of Appeals. The Atlanta-based appeals court will decide the fate of a similar law in Florida that was initially blocked by a federal judge for violating Section 230 of the Communications Decency Act.

Oura sues smart ring rival Circular for allegedly copying technology

Even smart rings aren’t immune to patent wars. Wareable notes Oura has sued fledgling rival Circular for allegedly violating patents covering both ring design and biometric data collection. Circular’s upcoming wearable allegedly copies Oura’s work by both stuffing electronics into a cavity and gathering info to generate an overall energy score.

Oura said it asked Circular to cease and desist in January, roughly a year after the newcomer started its crowdfunding campaign. Circular took on lawyers to review the patents in response.

Circular unsurprisingly objected to the lawsuit and characterized it as an attempt to stifle competition. In a statement, a spokesperson told Wareable that pursuing a monopoly has “never driven innovation.” Oura supposedly wants the smart ring market to itself, in other words.

It’s not certain which side will prevail. While the patents are broad, effectively covering many attempts to make smart rings, the US Patent Office did approve them. Circular may have to challenge the patents themselves to prevail in court, not just dispute their relevance to its particular finger-based technology.

Update 5/13/22 7:30pm ET: “At ŌURA, we embrace creativity and innovation in health technology, including from our competitors,” an Oura spokesperson told Engadget via email. “However, what we cannot accept is direct copying, as this does nothing to help consumers or advance our industry. The lawsuit filed against Circular addresses willful infringement of at least two ŌURA patents.”

Texas law that allows users to sue social networks for censorship is now in effect

The 5th US Circuit Court of Appeals has put a controversial Texas law that allows users to sue social media companies back into effect. As Houston Public Media notes, Texas introduced HB 20 last year after high-profile conservatives, including Donald Trump, were blocked on social media websites. A federal judge issued a temporary injunction against HB 20 in December, but that injunction has now been paused.

Under the law, users will be able to sue large social media platforms with more than 50 million active monthly users, such as Facebook and Twitter, if they believe they were banned for their political views. HB 20 also prohibits social networks from removing or restricting content based on “the viewpoint of the user or another person.”

Trade industry groups NetChoice and the Computer and Communications Industry Association (CCIA) managed to secure an injunction against the law last year. They argued that HB 20 would lead to the spread of misinformation and hate speech on social networks and that it also violates the websites’ First Amendment rights. The federal judge overseeing the case agreed that social networks have the right to moderate content under the First Amendment and also said that parts of the law are “prohibitively vague.”

In a hearing for the appeal filed by Texas, the state’s lawyers argued that social media platforms are “modern-day public squares.” That means they can be required to host content that they deem objectionable and are banned from censoring certain viewpoints. The 5th Circuit judges sided with Texas, with one even telling the trade groups during the hearing that social networks like Twitter are not websites but “internet providers” instead.

NetChoice counsel Chris Marchese called HB 20 “an assault on the First Amendment” and “constitutionally rotten from top to bottom” on Twitter. The trade groups plan to appeal immediately, but for now, HB 20 is fully in effect. 

A federal court blocked a similar law in Florida last year after the judge ruled that it violates Section 230 of the Communications Decency Act, which shields online platforms from liability for what their users post. Florida has appealed that decision, and the case will be decided by the 11th Circuit Court of Appeals.

Facebook faces lawsuit in Kenya over poor working conditions for moderators

Meta, Facebook’s parent company, is facing another lawsuit filed by one of its former content moderators. According to The Washington Post, this one was filed by Daniel Motaung, who accuses the company and San Francisco-based subcontractor Sama of trafficking Africans to work in exploitative and unsafe conditions in Kenya. The lawsuit alleges that Sama targets poor people across the region, including those from Kenya, South Africa, Ethiopia, Somalia and Uganda, with misleading job ads. They were reportedly never told that they’d be working as Facebook moderators and would have to view disturbing content as part of the job.

Motaung said the first video he watched was of someone being beheaded and that he was fired after six months on the job for trying to spearhead workers’ unionization efforts. A Time report looking into the working conditions of the office where Motaung worked revealed that several employees suffered mental trauma due to their jobs. Sama, which positions itself as an “ethical AI” company providing “dignified digital work” to people in places like Nairobi, has on-site counselors. Workers generally distrusted those counselors, though, and Sama reportedly rejected the counselors’ advice to give workers wellness breaks throughout the day.

As for Motaung, he said in the lawsuit that his job was traumatizing and that he now has a fear of death. “I had potential. When I went to Kenya, I went to Kenya because I wanted to change my life. I wanted to change the life of my family. I came out a different person, a person who has been destroyed,” he noted. The lawsuit also mentioned how Motaung was made to sign a non-disclosure agreement and how he was paid less than promised — 40,000 Kenyan shillings or around $350. The report by Time said employees left in droves due to the poor pay and working conditions. 

Harrowing stories of Facebook moderators having to watch traumatizing videos and working in poor conditions aren’t new and come from all over the world, including the US. In fact, the company agreed to pay $52 million to its US content moderators as part of a class-action lawsuit settlement back in 2020. Those who were diagnosed with psychological conditions related to their work got a payout of up to $50,000.

Meta’s Nairobi office told The Post that it requires its “partners to provide industry-leading pay, benefits and support.” It added: “We also encourage content reviewers to raise issues when they become aware of them and regularly conduct independent audits to ensure our partners are meeting the high standards we expect of them.”

New York AG’s lawsuit against Amazon dismissed by appeals court

Amazon has one less legal challenge to worry about. An appeals court today dismissed a lawsuit by New York State Attorney General Letitia James against the company over its coronavirus safety protocols and alleged retaliation against workers, Reuters reported. In its ruling, the court said that since federal labor law preempts state labor law, the National Labor Relations Board “should serve as the forum” for the dispute. It also pointed to a separate NLRB case over fired employee Gerald Bryson, saying it contained “essentially the same” allegations of retaliation and arguing there was a risk of “interference” with the NLRB’s jurisdiction.

The lawsuit — filed last year — accused Amazon of subjecting workers from two Staten Island facilities to unsafe conditions during the pandemic. It also alleged that Amazon retaliated against former employees Christian Smalls and Derrick Palmer — now of the Amazon Labor Union — by firing them after they protested the company’s working conditions. Just a few days earlier, Amazon filed its own lawsuit against the New York State attorney general’s office in an effort to stop the investigation.

Last month, it appeared that luck was on the NY State attorney general’s side when a federal judge denied Amazon’s bid to transfer the lawsuit. But the New York Court of Appeals today not only reversed this decision but also dismissed claims in the state attorney general’s lawsuit that Amazon violated COVID-19 health and safety protocols. The appeals court said that because New York State’s coronavirus workplace protocols have since been lifted, the lawsuit’s efforts to get Amazon to comply with them were “moot.”

“Throughout the pandemic, Amazon has failed to provide a safe working environment for New Yorkers, putting their health and safety at risk. As our office reviews the decision and our options moving forward, Attorney General James remains committed to protecting Amazon workers, and all workers, from unfair treatment,” wrote Morgan Rubin, a spokesperson for the attorney general, in a statement to Engadget.

Engadget has reached out to Amazon for comment on the lawsuit and will update if we hear back. 

Tinder owner Match Group sues Google alleging antitrust violations

The parent company of Tinder and Hinge has sued Google. In a complaint (PDF link) filed Monday with a federal court in California, Match Group alleges the tech giant broke federal and state antitrust laws with its Play Store guidelines.

The lawsuit concerns a policy Google plans to implement later this year. In the fall of 2020, the company “clarified” its stance on in-app purchases, announcing it would eventually require all Android developers to process payments involving “digital goods and services” through the Play Store billing system. Google initially said it would begin enforcing the policy on September 30th, 2021, but later extended the deadline to June 1st, 2022.

Match alleges Google had “previously assured” the company it could use its own payment systems. The company claims Google has threatened to remove its apps from the Play Store if it does not comply with the upcoming policy change by the June 1st deadline. Match further claims Google has preemptively started rejecting app updates that maintain the existing payment systems found in its dating services. “Ten years ago, Match Group was Google’s partner. We are now its hostage,” the company says in its complaint.

“This lawsuit is a measure of last resort,” Match CEO Shar Dubey said in a statement the company shared with Engadget. “We tried, in good faith, to resolve these concerns with Google, but their insistence and threats to remove our brands’ apps from the Google Play Store by June 1st has left us no choice but to take legal action.”

In a statement Google shared with Engadget, the search giant said Match is eligible to pay a 15 percent commission on in-app purchases, a rate the company noted is the lowest among “major app platforms.” Google also pointed out that the “openness” of Android allows Match to distribute its apps through alternative app stores and sideloading if the company “doesn’t want to comply” with its policies. “This is just a continuation of Match Group’s self-interested campaign to avoid paying for the significant value they receive from the mobile platforms they’ve built their business on,” a Google spokesperson told Engadget.

The lawsuit comes at a time when both Apple and Google face significant regulatory pressure from lawmakers around the world to change their app store policies. In February, the Senate Judiciary Committee advanced the Open App Markets Act. Should the legislation become law as it stands, it would prevent both companies from locking third-party developers into their respective payment systems. At the same time, Match hasn’t been free of scrutiny either. The company recently said it would stop charging older users more for its dating app subscriptions after a report from Mozilla and Consumers International found Match charged those individuals “substantially more.” 

In March, Google announced it was partnering with Spotify to test third-party billing systems. Notably, Match says that pilot offers “nothing new for developers or users.” The company also said Google rejected its request to be included in the program and would not share the criteria for inclusion.

Update 05/10/22 8:53AM ET: In a new blog post responding to Match’s allegations, Google calls the company’s complaint “cynical” and accuses Match of “attempting to freeload off our investments rather than being a responsible partner.” In addition to highlighting many of the same points Google shared in its initial statement to Engadget, the blog post notes that the FTC sued Match in 2019 for using fake ads to trick consumers into paying for subscriptions.

Clearview AI agrees to limit sales of facial recognition data in the US

Notorious facial recognition company Clearview AI has agreed to permanently halt sales of its massive biometric database to all private companies and individuals in the United States as part of a legal settlement with the American Civil Liberties Union, per court records.

Monday’s announcement marks the close of a two-year legal dispute brought by the ACLU and privacy advocate groups in May of 2020 against the company over allegations that it had violated BIPA, the 2008 Illinois Biometric Information Privacy Act. This act requires companies to obtain permission before harvesting a person’s biometric information — fingerprints, gait metrics, iris scans and faceprints for example — and empowers users to sue the companies who do not. 

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” Rebecca Glenberg, staff attorney for the ACLU of Illinois, said in a statement. “BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables. Today’s agreement begins to ensure that Clearview complies with the law. This should be a strong signal to other state legislatures to adopt similar statutes.”

In addition to the nationwide ban on sales to private parties, Clearview will not offer any of its services to Illinois state and local law enforcement agencies (or to any private parties) for the next five years. “This means that within Illinois, Clearview cannot take advantage of BIPA’s exception for government contractors during that time,” the ACLU points out, though Federal agencies, as well as state and local law enforcement departments outside of Illinois, will be unaffected.

That’s not all. Clearview must also end its free trial program for police officers, erect and maintain an opt-out page for Illinois residents, and spend $50,000 advertising it online. The settlement must still be approved by a federal judge before it takes effect.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project, said in Monday’s statement. “Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

Monday’s settlement is the latest in a long line of privacy lawsuits and regulatory actions against the company. Clearview AI was slapped with a €20 million fine by Italian regulators in March and £17 million in November by the UK, both for violations of national data privacy laws. Australia has been investigating the company’s scraping schemes since 2020 and, currently, a small group of US lawmakers are lobbying to ban Federal agencies from using Clearview’s services entirely. But given that the company boasted in February that it had amassed 100 billion images in its “index of faces,” the right to anonymity in America remains deeply in peril.

Federal judge dismisses Trump’s lawsuit against Twitter

San Francisco federal district court Judge James Donato has tossed the lawsuit Donald Trump filed against Twitter last year in a bid to get his account back. The social network permanently suspended the former president’s account after his supporters stormed the Capitol in January 2021. In the company’s announcement, Twitter cited two of his tweets in particular that it believes were “highly likely to encourage and inspire people to replicate the criminal acts that took place at the US Capitol” on January 6th last year.

Trump filed a lawsuit in October, seeking a preliminary injunction on the ban and arguing that it violates his First Amendment rights. Donato disagreed and noted in his ruling that Twitter is a private company. “The First Amendment applies only to governmental abridgements of speech,” he explained, “and not to alleged abridgements by private companies.” The judge also rejected the notion that the social network had acted as a government entity after being pressured by Trump’s opponents and had thereby violated the First Amendment when it banned the former President. 

In his lawsuit, Trump asked the judge to declare unconstitutional the federal Communications Decency Act, which states that online service providers such as Twitter can’t be held liable for content posted by users. The judge shot down that claim as well, ruling that the former President didn’t have legal standing to challenge Section 230 of the CDA. Trump is a known critic of Section 230 and proposed limiting the protections social media platforms enjoy under it during his term.

The former President was an avid Twitter user before his suspension and formed his own social network called Truth Social after he was banned. Just recently, he told CNBC that he won’t be going back to Twitter even if Elon Musk reverses his suspension and will stay on Truth Social instead. According to a recent report by the Daily Beast, Truth Social has 513,000 daily active users compared to Twitter’s 217 million.