Farangland News

2021


 

Robocop...

Singapore launches dystopian robots to police bad behavior


Link Here 9th September 2021
Singapore is testing patrol robots as the latest addition to its mass surveillance infrastructure. The pair of machines, named "Xavier", will have the task of making sure the country's residents behave themselves in public spaces.

Singapore's Home Team Science and Technology Agency announced a three-week trial of the technology before it is handed over to the local police. The intent is to use the robots as a supplementary workforce to help out Singapore's public officers.

The "Xaviers" are fitted with cameras and networked with a command and control center, and report back on people's "bad behavior" in real time.

What qualifies as bad social behavior in Singapore right now is falling afoul of COVID restrictions, but also things like parking your bike where you're not supposed to, or smoking in public areas.

There is also a re-education element, as the robots will display messages instructing humans on the expected "proper" social behavior.

 

 

Compliance apps...

South Australia chooses to enforce home covid quarantine with an app using facial recognition and geolocation


Link Here 4th September 2021
Australia has got itself into a bit of a mess over covid. Australians enjoyed a relatively covid-free life via strict border controls, with a few local lockdowns to deal with leaks. Now the delta variant has leapt over these hurdles, leaving slowly vaccinating Australians with a big problem.

The Australian Government has chosen to adopt some very harsh control measures that wouldn't be out of place in China. Reports suggest that the state of South Australia has gone full Mao.

The state is using geolocation and facial recognition of citizens to make sure everyone is compliant with its policies. It has introduced an app called Home Quarantine SA and has ordered all SA residents to download it.

The app enforces quarantine orders by contacting people at random and asking them to provide proof of their location within 15 minutes. Citizens then share their location with the government or provide 'live face check-ins' to confirm they are at their 'registered quarantine address', according to media reports describing how the app is presented in app stores.

Enforcement seems to be based on national policy whereby transgressors can be sent to Covid camps (quarantine hotels) or fined up to 1,000 Australian dollars.

 

 

Dangerous Thailand...

Travel from Thailand to the UK is banned for Thais and requires 10 days of hotel quarantine for Brits.


Link Here 27th August 2021
The UK has banned Thai people from travelling to the UK over concerns about the covid situation in Thailand. British and Irish travellers are allowed to travel from Thailand to the UK but they must go into very expensive hotel quarantine for 10 days.

The rules kick in for all arrivals from Monday 30th August.

It is probably doubly bad news for Brits eyeing the possibility of travel to Thailand, as Britain has generally imposed a red listing when countries have cases of a variant of concern. Thailand said earlier that it has detected four new variants, but that these are nothing to worry about. Perhaps Britain thinks otherwise.

 

 

A date with ID verification...

Dating app Tinder confirms plans to introduce voluntary ID verification


Link Here 18th August 2021
Dating app Tinder has confirmed plans to introduce ID verification for its users around the world.

The firm said it would begin by making the process voluntary, except where mandated by law, and would take into account expert recommendations and what documents are most appropriate in each country.

It could come into force by the end of the year.

Critics of the idea have argued it could leave whistleblowers exposed, particularly in authoritarian states, and restrict access to online services in countries where ID documents are not commonplace among the entire population.

Tracey Breeden, vice president of safety and social advocacy at Tinder's parent firm, Match Group, said feedback from experts and users would be a vital part of its approach in helping ease such fears:

We know that in many parts of the world and within traditionally marginalised communities, people might have compelling reasons that they can't or don't want to share their real-world identity with an online platform, she said.

Creating a truly equitable solution for ID verification is a challenging but critical safety project and we are looking to our communities as well as experts to help inform our approach.

 

 

Updated: Danger! Poisoned Apple...

Apple will add software to scan all your images, nominally for child abuse, but no doubt governments will soon be adding politically incorrect memes to the list


Link Here 14th August 2021
   Apple intends to install software, initially on American iPhones, to scan for child abuse imagery, raising alarm among security researchers who warn that it will open the door to surveillance of millions of people’s personal devices.

The automated system would proactively alert a team of human reviewers if it believes it has detected illegal imagery; the reviewers would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US.

According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a safety voucher saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.

The scheme seems to be a nasty compromise with governments to allow Apple to offer encrypted communication whilst allowing state security to see what some people may be hiding.

Alec Muffett, a security researcher and privacy campaigner who formerly worked at Facebook and Deliveroo, said Apple's move was tectonic and a huge and regressive step for individual privacy. Apple are walking back privacy to enable 1984, he said.

Ross Anderson, professor of security engineering at the University of Cambridge, said:

It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of . . . our phones and laptops.

Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple's precedent could also increase pressure on other tech companies to use similar techniques.

And given that the system is based on mapping images to a hash code and then comparing that hash code with those from known child porn images, there is surely a chance of a false positive when an innocent image just happens to map to the same hash code as an illegal image. That could have devastating consequences, with police banging on doors at dawn accompanied by the 'there's no smoke without fire' presumption of guilt that surrounds the scourge of child porn. An unlucky hash may then lead to a trashed life.
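The collision risk follows from how perceptual hashing works: unlike cryptographic hashes, perceptual hashes deliberately map similar-looking images to the same short code, so two unrelated images can end up sharing one. A toy average-hash (aHash) sketch makes the point; the 4x4 "images" below are invented stand-ins, and real systems such as Apple's NeuralHash are far more sophisticated:

```python
# Toy average-hash (aHash): threshold each pixel against the image mean.
# Many distinct images map to one short code, which is exactly why
# unrelated images can collide.

def average_hash(pixels):
    """Map a flat list of grayscale pixel values to a bit string."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

# Two clearly different 4x4 "images" (flattened grayscale values)...
image_a = [200, 10, 200, 10, 10, 200, 10, 200,
           200, 10, 200, 10, 10, 200, 10, 200]
image_b = [255, 0, 255, 0, 0, 255, 0, 255,
           255, 0, 255, 0, 0, 255, 0, 255]

# ...that nevertheless collide to the same hash.
print(average_hash(image_a))  # 1010010110100101
print(average_hash(image_b))  # 1010010110100101
print(average_hash(image_a) == average_hash(image_b))  # True
```

Real perceptual hashes have far longer codes and much lower collision rates, but the rate is never zero, which is what makes the threshold discussed below matter.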

Apple's official blog post inevitably frames the new snooping capability as if it were targeted only at child porn, but it is clear that the capability can be extended well beyond this narrow definition. The blog post states:

Child Sexual Abuse Material (CSAM) detection

To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user's account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.

Expanding guidance in Siri and Search

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

These updates to Siri and Search are coming later this year in an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*
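Apple's quoted description leans on two cryptographic ideas: private set intersection and threshold secret sharing. The latter can be illustrated with a minimal Shamir-style sketch; this shows the general technique, not Apple's actual construction, and the prime, threshold and key below are invented for the demo:

```python
# Minimal Shamir-style threshold secret sharing sketch -- an illustration
# of the general technique, NOT Apple's actual construction. The "secret"
# stands in for a decryption key: any `threshold` shares recover it,
# while fewer shares reveal nothing about it.
import random

PRIME = 2**61 - 1  # a Mersenne prime; field size is illustrative

def make_shares(secret, threshold, n):
    """Split `secret` into n shares via a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789                              # stand-in decryption key
shares = make_shares(key, threshold=3, n=5)  # e.g. one share per flagged photo
print(recover(shares[:3]) == key)  # True: three shares suffice
print(recover(shares[:2]) == key)  # False: below threshold, the key stays hidden
```

In the scheme Apple describes, each matching image's safety voucher effectively carries a share; only once enough vouchers accumulate can the server reassemble the key and decrypt the flagged material.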

Update: Apple's photo scanning and snooping 'misunderstood'

13th August 2021. See article from cnet.com

Apple plans to scan some photos on iPhones, iPads and Mac computers for images depicting child abuse. The move has upset privacy advocates and security researchers, who worry that the company's newest technology could be twisted into a tool for surveillance and political censorship. Apple says those concerns are misplaced and based on a misunderstanding of the technology it's developed.

In an interview published Friday by The Wall Street Journal, Apple's software head, Craig Federighi, attributed much of people's concerns to the company's poorly handled announcements of its plans. Apple won't be scanning all photos on a phone, for example, only those connected to its iCloud Photo Library syncing system.

It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood, Federighi said in his interview. We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing.

 

 

Update: Apple offers slight improvements

14th August 2021. See article from theverge.com

The idea that Apple would be snooping on your device to detect child porn and nude images hasn't gone down well with users and privacy campaigners. The bad publicity has prompted the company to offer an olive branch.

To address the possibility of countries expanding the scope of flagged images for their own surveillance purposes, Apple says it will only detect images that appear on at least two countries' lists. Apple says it won't rely on a single government-affiliated database -- like that of the US-based National Center for Missing and Exploited Children, or NCMEC -- to identify CSAM. Instead, it will only match pictures that appear in the lists of at least two groups with different national affiliations. The goal is that no single government could have the power to secretly insert unrelated content for censorship purposes, since it wouldn't match hashes in any other database.
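Mechanically, this safeguard amounts to intersecting independently curated hash lists before anything is eligible for matching. A minimal sketch, in which all hash values and the second organisation's list are invented for illustration:

```python
# Hypothetical hash lists from two independently run child-safety
# organisations. Only hashes that appear in BOTH lists are eligible
# for on-device matching, so neither government can unilaterally
# slip in a target of its own.
ncmec_hashes = {"a1f3", "9bd2", "77c0", "e4d9"}
other_org_hashes = {"9bd2", "e4d9", "f001"}  # hypothetical non-US list

eligible = ncmec_hashes & other_org_hashes
print(sorted(eligible))  # ['9bd2', 'e4d9']

# "f001" was inserted into only one list, so it never matches anything.
print("f001" in eligible)  # False
```

The design choice is that a censorship insertion would have to be coordinated across organisations in different jurisdictions to survive the intersection.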

Apple has also said that it would 'resist' requests from countries to expand the definition of images of interest. However this is a worthless reassurance when all it would take is a court order for Apple to be forced into complying with any requests that the authorities make.

Apple has also stated the tolerances that will be applied to prevent false positives. It is alarming that innocent images can in fact generate a hash code that matches a child porn image. To try to prevent innocent people from being locked up, Apple will now require 30 images to have hashes matching illegal images before the images get investigated by Apple staff. Previously Apple had declined to say what the tolerance value would be.
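The effect of a 30-match threshold can be illustrated with a simple binomial model. Assuming, purely for illustration, that each innocent photo has a small independent chance p of falsely matching a database hash (the values of p and n below are invented, not Apple's published figures), the chance of a library reaching the threshold collapses as the threshold rises:

```python
# Toy binomial model of false positives. The per-image false-match
# rate p and library size n are illustrative assumptions only.
from math import lgamma, log

def log10_binom_term(n, k, p):
    """log10 of C(n,k) * p^k * (1-p)^(n-k), computed in log space
    to avoid underflow. For tiny p this k-th term dominates the
    P(at least k matches) tail."""
    log_c = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return (log_c + k * log(p) + (n - k) * log(1 - p)) / log(10)

p = 1e-6      # assumed per-image false-match probability
n = 10_000    # photos in the library

print(1 - (1 - p) ** n)            # P(at least one match): roughly 0.01
print(log10_binom_term(n, 30, p))  # about -92: 30 matches is astronomically unlikely
```

So even if single-image false matches were fairly common across a large user base, requiring 30 of them in one account pushes the per-account risk to negligible levels, which is presumably the reasoning behind Apple's figure.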

 

 

Dangerous Thailand...

US public health agency recommends against travel to Thailand


Link Here 11th August 2021
The US Centers for Disease Control and Prevention (CDC) has warned against travel to Israel, France, Thailand, Iceland and several other countries because of a rising number of COVID-19 cases in those nations.

The CDC has been adding to its highest Level 4: Very High COVID-19 level as cases spread around the globe.

It is worth noting that the number of cases in the USA has shot up recently and now dwarfs those of all the 'dangerous' places mentioned above.

 

 

 

Who remembers those comic book X-ray specs adverts?...

UK MP Maria Miller wants to ban an app that claims it can work out the nude body that hides behind clothed photos


Link Here 3rd August 2021
MP Maria Miller wants a parliamentary debate on whether digitally generated but imaginary nude images need to be banned.

It comes as another service has emerged which allows users to guess what people in photos look like undressed.

The DeepSukebe 'nudifier' website had more than five million visits in June, according to one analyst. Celebrities, including an Olympic athlete, are among those whom users claim to have 'nudified'.

DeepSukebe's website claims it can reveal the truth hidden under clothes. According to its Twitter page, it is an AI-leveraged nudifier whose mission is to make all men's dreams come true. And in a blog post, the developers say that they are working on a more powerful version of the tool.

Miller told the BBC it was time to consider a ban of such tools:

Parliament needs to have the opportunity to debate whether nude and sexually explicit images generated digitally without consent should be outlawed, and I believe if this were to happen the law would change.

If software providers develop this technology, they are complicit in a very serious crime and should be required to design their products to stop this happening.

She said that it should be an offence to distribute sexual images online without consent to reflect the severity of the impact on people's lives. Miller wants the issue to be included in the forthcoming Online Safety Bill.

