Creepy cyberstalking just got a whole lot easier
Imagine that you are sitting on a bus. A few seats ahead of you, you see a person you think you recognise. Perhaps it is an old classmate? Using your mobile phone camera, you take a photograph of her.
Using off-the-shelf face-recognition software combined with substantial processing power accessed cheaply through cloud computing, you search for publicly available images (such as on Facebook) matching the image you captured of the woman on the bus.
You find out that the person is indeed your old classmate, and you can also access information such as her address, phone number, birthday and perhaps income.
This scenario is not science fiction. It is reality.
Face-to-data (F2D) refers to at least partially automated processes for accessing personal information about a person based on an image of that person’s face.
While face recognition, and the legal issues associated with it, has attracted attention for some time now, recent ground-breaking research by a team from Carnegie Mellon University highlights a new set of privacy concerns.
Advances in face-recognition technology, combined with the growth of cloud computing and the widespread posting of images linked to names on social media sites, mean that accessing a person's private information has become a whole lot easier.
The Carnegie Mellon University team have run a series of experiments. Using a search tool, they have built a database of images and names collected from publicly available Facebook profiles.
They then captured images of consenting students, ran those images through off-the-shelf face-recognition software, and matched the results against the data gathered from the Facebook profiles. In the test, about a third of the students were identified.
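The matching step at the heart of such a pipeline can be sketched in a few lines. This is not the Carnegie Mellon team's actual code; it is a minimal illustration, assuming the off-the-shelf software has already converted each face image into a fixed-length numeric vector (an "embedding"), and the names and embeddings shown are invented:

```python
import math

# Hypothetical database built from public profiles: each name is
# linked to a pre-computed face embedding (invented values).
profile_db = {
    "Alice Smith": [0.10, 0.90, 0.30],
    "Bob Jones":   [0.80, 0.20, 0.50],
}

def euclidean(a, b):
    """Distance between two embeddings: smaller means more similar faces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, db, threshold=0.6):
    """Return the name whose stored embedding is closest to the query
    embedding, or None if nothing falls within the distance threshold."""
    name, dist = min(((n, euclidean(query, e)) for n, e in db.items()),
                     key=lambda pair: pair[1])
    return name if dist <= threshold else None

# An embedding computed from a photo snapped on the bus, close to Alice's.
print(best_match([0.12, 0.88, 0.31], profile_db))  # prints "Alice Smith"
```

Real systems compare against millions of profiles, which is where the cheap cloud-computing power mentioned above comes in.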
By mining public sources, including Facebook profiles and government databases, the researchers could identify at least one personal interest of each student and, in a few cases, the first five digits of a Social Security Number.
The scenario of confirming the identity of someone on a bus is of course only one possible way F2D technology could be used.
Other obvious uses include governmental surveillance, and business uses such as gathering data (for example, financial position and address) about customers entering a shop.
If we change the scenario so that the person photographed on the bus is an 11-year-old girl, and the photographer is a paedophile using F2D technology to identify, track and stalk her, the scenario is suddenly not appealing at all (if it ever was).
Like (almost) all technologies, F2D can be used both for good and for evil. This should not be ignored. We must at least consider regulating its use.
We must ask to what extent the Government's current privacy reform will protect us against F2D. Could it be that the "new" privacy law is already outdated before it has even taken effect?
Do we need to go further? Perhaps with F2D we have finally reached a stage where the technology is so "creepy" that it does not matter whether it is merely being used to satisfy "harmless" curiosity; perhaps it should quite simply be banned.
In the end, F2D may be best viewed as just another example of how technology is developed based on what technology can do. Maybe we now need to shift the focus to what the technologies we develop should do?