17 February 2021

Dice Namer Constraint

A constraint is an artificial challenge during an exercise that makes the exercise more interesting, challenging or fun. I like constraints and have written about some of them. The Dice Namer is such a constraint: everything but the names of test methods is named by rolling dice. The French company Arolla created some really nice dice printed with random, enterprise-y, useless names like Processor, Dummy or Factory. I managed to get several sets and used them in my coding exercises after discussing naming in code. Now, with remote work due to Covid, I had to come up with something new. And here it is, the

Arolla Dice Namer Application

Press the buttons and see random dice for your name, together with a dice-like sound (if you allow your browser to play it). This is real fun and it will help you create amazing code like this one using viciously named functions...
What does this code do?
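The original snippet is not reproduced here, but as a hypothetical illustration of the constraint, here is what dice-named code might look like - every identifier (processor, dummy, factory, manager, proxy) is an invented, rolled-from-the-dice name:

```python
# Hypothetical dice-named code: every identifier was "rolled" from
# enterprise-y dice words like Processor, Dummy or Factory.
# Can you tell what it computes?

def processor(dummy):
    factory = ''
    if dummy % 3 == 0:
        factory += 'Fizz'
    if dummy % 5 == 0:
        factory += 'Buzz'
    return factory or str(dummy)

def manager():
    return [processor(proxy) for proxy in range(1, 16)]
```

(Spoiler: it is plain old FizzBuzz in disguise - which is exactly the point of the constraint: the names tell you nothing.)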

1 February 2021

Working with AI Surveillance

For several years I have been exploring the ethics and meaning of my work as a software engineer. While it is not a clear-cut topic, it lies at the core of our work, just like all the tech and engineering topics. Last year Artur Meyster, CTO of Career Karma, contacted me about cooperating on a guest blog post here on my site. (Thank you Artur for your patience.) While guest blogging seems mainly SEO related, I do not mind a fresh perspective on a topic I have been thinking about, much like the series of interviews I conduct from time to time. This post was mostly written by Maria Elena Gonzalez. Maria is a broadcast journalist and has been working as a tech writer for three years. During this time, her work has been published by companies like TechAccute, Trip University, and Entrepreneur - and now by the Code Cop ;-)

Heavy Surveillance
AI Surveillance
The recent evolution of technology has resulted in many new areas, such as data science, virtual reality or machine learning. For developers, these are the new and cool topics to explore and dive into. (Maybe start with a data science boot camp.) Applications in these areas have improved our quality of life, and AI surveillance is meant to serve the same purpose: to improve our quality of life. However, as is the case with any invention, AI surveillance has a dark side: we might be losing our right to privacy.

You may think this is a problem only for famous people, but that is not the case. It can also be a problem for you. According to a recent report from the Carnegie Endowment for International Peace, 75 out of 176 countries are using Artificial Intelligence (AI) for surveillance purposes, which makes it hard to know who is safe from AI surveillance. If you would like to know more about the impact of AI surveillance on our daily life, read on.

What is AI Surveillance?
AI surveillance goes beyond AI-driven security cameras. Companies and governmental organizations have been using this technology to track trends and transitions to make better business decisions.

How is it Affecting Us?
AI surveillance has many benefits. For example, it can improve traffic. Have you ever waited for a traffic light to turn green even though there were no cars around? This could be improved by using AI surveillance: AI algorithms can detect movement and change traffic light phasing based on real-time activity. Financial companies can use AI surveillance to spot malicious activities and minimize fraud. This has the potential to revolutionize the financial industry. The possibilities are endless, and we could continue to show you the positive aspects of AI surveillance, but let's take a look at the darker side.
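To make the traffic example concrete, a minimal sketch of demand-driven light phasing might look like this - all names, thresholds and rules are invented for illustration and not taken from any real system:

```python
# Toy sketch of demand-driven traffic light phasing.
# All names and thresholds are invented for illustration.

MIN_GREEN = 10  # shortest green phase in seconds
MAX_GREEN = 60  # cap so cross traffic is never starved

def green_duration(cars_detected):
    """Scale the green phase with detected demand, clamped to safe bounds."""
    # grant 3 extra seconds per detected car on top of the minimum
    return max(MIN_GREEN, min(MAX_GREEN, MIN_GREEN + 3 * cars_detected))

def should_switch(cars_on_green, cars_on_red):
    """Switch phases as soon as the waiting queue outweighs the moving one."""
    return cars_on_red > cars_on_green
```

With no cars detected the light would cycle at the minimum; a real system would feed these decisions from camera or sensor data and add safety interlocks.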

With the amount of data collected increasing rapidly, the possibility of privacy invasion is rocketing. While some AI surveillance activities fall within the law, others violate privacy. Privacy International points out that AI surveillance can be very delicate when it comes to data privacy: "With the proliferation of surveillance cameras, facial recognition, open-source and social media intelligence, biometrics, and data emerging from smart cities, the police now have unprecedented access to massive amounts of data." Governments that take advantage of this technology have access to citizens' personal data, a situation that can quickly turn into a violation of privacy rights if left unchecked.

Which Countries are Adopting AI Surveillance Technology?
Among the leaders in the use of AI surveillance are China and the United States. Out of the two, China is implementing the technology more widely. According to the Carnegie Endowment for International Peace, the organizations that most often use AI surveillance are part of the government. China is not only using the technology to improve traffic, it is also applying AI surveillance to track the activities of the Uighurs, in the northwestern province of Xinjiang, according to LiveWithAI. By using facial recognition based on race and ethnicity characteristics, the Chinese government can easily detect when a Uighur attempts to flee Xinjiang.

Is the US government using AI surveillance? Yes, of course. Common applications of governmental AI surveillance include military activity and security. For example, Palantir might have helped to power Trump's "extreme vetting" of immigrants. However, there are other applications that aim at improving the performance of cities.

James, I think your cover's blown!
Which Companies Provide This Kind of Service?
According to the report from the Carnegie Endowment for International Peace, some of the leading companies providing AI surveillance are Huawei, ZTE and Hikvision. These companies provide AI-powered surveillance to more than 60 countries. A particularly controversial case involving AI surveillance and facial recognition technology is that of Clearview. This company collects and stores publicly available data on every person on the planet. Scraping social media platforms such as Facebook and Instagram, Clearview has collected more than 3 billion images, making it the largest such library in the world. Using its enormous library of pictures, Clearview has created a search engine for faces that can be used to recognize anyone. This tool has already been used by more than 600 law enforcement agencies, including the FBI and the US Department of Justice. Clearview's willingness to do what no one else dared to do - scrape freely available data and exploit it for profit - has put it in the centre of a moral debate.

Should You Apply to Such Companies?
Now we are getting to the meat of the discussion: As a software professional, should you work for a company that is working on this type of solution? How do you navigate the moral implications of your work? These are complex questions that seem to require deep introspection. However, at the end of the day - at least for the extreme cases mentioned above - the answer is simpler than you expect.

On the personal level, whether working for a company like Clearview is a good idea depends on your values and what you prioritize in life. Some people abhor the whole concept of using free data to spy on people, while others would simply see nothing wrong with this idea. If you are more like the former, then working for a company like Clearview is a bad idea. It goes against your principles - you will simply not be happy in the long run, regardless of the size of that paycheck. If, on the other hand, you don't see anything wrong with these practices and everything else about the job looks good, you might think it is OK to do the job. It is not.

In his famous keynote Not just code monkeys, Martin Fowler identified two areas where most of our impact and responsibility as developers lie at the moment: privacy and avoiding the creation of an alienating atmosphere at our workplaces.

Conclusion
AI surveillance can be beneficial if used wisely and without violating privacy rights. However, some organizations are using this technology to invade our privacy. The key to avoiding this is to create new policies that protect our privacy rights. As a developer, you must be aware of the implications of your work on privacy and steer clear of such violations.