May 28, 2022 – Over the years, we've talked a lot about the fact that there is no national legal framework in place to protect personal privacy in the private sector. The core issue is that technology moves much faster than privacy regulation. The problem is exacerbated by the fact that many of our policymakers in Congress and state governments are not technology savvy. And in many cases, their campaigns are funded by the very technology companies they should be regulating. With the advent of AI, the issue is only going to get worse. And there isn't much time for the laws to catch up, because AI is already here and is being used in the private sector and by law enforcement agencies alike. Here is just a sampling of what the general public is facing; some of it good and some of it intrusive.
On the good side, there is a company by the name of Actuate. The company is using AI to network already-installed security cameras. Their goal isn't to identify people. It is to find people carrying weapons, specifically guns, in places they shouldn't be.
Actuate got started after the 2017 Las Vegas shooting and is primarily a software provider. They don't use facial recognition, and they aren't building their own facial recognition database. Their sole focus is preventing mass casualty events; something most people can agree needs to be a priority for law enforcement.
Actuate isn’t the only company in this market. Other companies like Evolv and Omnilert are also focused on what is fast becoming known as “threat detection.”
As good as their intentions are, these companies need to be regulated. While most people would agree that a perfect stranger bringing a gun into a church service or school is problematic, searching for people carrying guns in areas where it is perfectly legal could be a problem. It could allow the government to build a database of gun owners; something that is illegal. And guns are only one example.
On the other side of the AI issue are companies that have very negative implications for privacy. There is now a picture search engine named PicEyes. Just agree to pay $29.99 a month and you can upload as many pictures of people as you like for the sole purpose of identifying them.
Just imagine. Take a picture of a busy street. There is someone standing at a corner, minding their own business and waiting to cross when there is a break in traffic. If the quality of the photo is good enough, you can use PicEyes to identify them.
Of course, PicEyes does require subscribers to sign an agreement that they will only upload pictures of themselves or of people who have given permission. But there is no way to enforce those requirements. Furthermore, why would anyone need a monthly subscription to upload pictures of themselves? The premise is laughable. On the other hand, if you were a stalker, you could certainly find a use for this service. Just my opinion, of course. And just imagine what this technology could do if someone uploaded a picture of a person who entered the federal witness protection program a few years ago.
Whether or not the companies releasing AI-based services have good intentions today, there is absolutely no way to know how this technology will be used in the future. If history is any indication, though, these technological advances will inevitably be turned against the average citizen. The only way to prevent, or even limit, that is to put in place regulations that restrict the use of AI for surveillance purposes. Unfortunately, there doesn't seem to be any focus in Congress or state legislatures on moving that type of legislation forward.
by Jim Malmberg
Note: When posting a comment, please sign-in first if you want a response. If you are not registered, click here. Registration is easy and free.