Internet Freedom in a Surveillance World

Internet freedom is at risk. Every day, tech companies gather our information left and right; they supposedly do so to make our experience using their services better, but what happens when the government orders these companies to give it full access to all of their users’ data? Privacy is violated.

“The most overarching and important is the Kantian idea of respect for the dignity of the person. When the self can be technologically invaded without permission and even often without the knowledge of the person, dignity and liberty are diminished.”  (Marx, 1998, 21).

Violating someone’s privacy is violating their human rights, dignity, and liberty. They are no longer able to act, think, or learn in a private environment; everything is recorded. This creates a passive society that agrees with the government out of fear of creating conflict and being seen as a potential threat. The people fear their own government: the government they elected to represent them is feared by the very voters who put it there. Crazy, isn’t it?

“The return to a similarly controversial situation in which intelligence services continue to obtain information through ethically questionable or illegal methods poses serious challenges for democracies. Yet the reaction of the public to the more recent scandals has been quite nuanced. While there has been clear outrage among civil society groups and certain segments of the public, the perceived terrorist threat seems to have also led to a surprisingly muted reaction. Polls indicate that members of the public in several countries support mass surveillance as an acceptable method of intelligence collection. Additional surveys even indicate acceptance of enhanced interrogation techniques.” (Martin, 2016, 19).

Governments perform this mass surveillance in the name of safety, but there must be a limit to how much of our freedom and privacy we are willing to give up for the sake of this “national security”. These information-gathering practices are already morally questionable and potentially illegal. The government does not apply its own laws to itself; it is allowed to act as a free entity that can do with its population as it pleases, and people seem to be fine with that, as long as they are not too affected by the government’s activities. If you play nice, you get treated nice, “nice” meaning the way the government likes it.

It is sad to see society’s indifference to all of its information being available to corporations and governments, and to see that Aaron Swartz’s work may not have the impact he expected it to have; most people seem to simply not mind being so exposed and watched these days. Being a public person is very common now, but we all like some privacy, and that privacy is diminished more and more every day by frequent privacy breaches. As a society we must fight for our privacy rights and not give way to more and more abuse from the government. As we know, information is power, and we do not want to give more power to an entity that is already trying to know everything about us and potentially control our lives.

Martin, S. (2016). Spying in a Transparent World: Ethics and Intelligence in the 21st Century. Geneva Centre for Security Policy. Obtained from: https://www.gcsp.ch/News-Knowledge/Publications/Spying-in-a-Transparent-World-Ethics-and-Intelligence-in-the-21st-Century

Marx, G. (1998). An Ethics for The New Surveillance. The Information Society, MIT. Obtained from: http://web.mit.edu/gtmarx/www/ncolin5.html

On Blogging

After having a lot of late assignments for this class, I found out that having to create blog post after blog post can be quite tedious, boring, and tiresome. But once a while had passed and I gave it some thought, I actually kind of enjoyed pushing out post after post. It felt like I was creating a footprint of what I was learning, one I could revisit any time or share with someone and tell them “HEY! I LEARNED THIS! YOU CAN TOO!”.

Of course, right now most of my posts are assignment-related, but I actually kind of want to blog about other things; I just prefer to play videogames or something. However, if I ever get tired of playing League of Legends or sharing Facebook memes, I will for sure start blogging about anything I like! It sounds like a super fun idea: I am leaving my digital fingerprint on the Internet. I may die, but my blog posts will probably be stored on the internet forever, occupying space on WordPress’ servers without me ever paying a cent, FeelsGoodMan.

I really encourage anyone who has doubts about blogging to just give it a try. It is a really nice way to express yourself, and this is probably the post I have enjoyed writing the most, because I am finally expressing what I think about blogging itself. Don’t be ashamed of posting something, just post it!

Smart Citizens – Week 8

This post comes after reading DataCités: Data as a commons for Smart City.

I really liked the term “invisible economy”: we don’t realize that the cost we are paying for all these apparently free services is the constant stream of information we are giving up to these big corporations. One of the biggest problems this creates is that our public data is used for private purposes by big corporations, and that algorithms are given absolute control over potentially important parts of our lives, like the insurance example the article gave.

This all undermines the collective ownership principle that smart cities are meant to embody: “Civil society, too, is caught between the incontestable benefits of these services and a growing uncertainty about the fate of its data.” And it comes with big tech companies pushing for even less regulation of their data-gathering activities, with Silicon Valley companies recently pushing for more ‘self-regulation’ and less legislation in the US.

“Digital intelligence is neither inherently virtuous nor corrupt; however, as efficient as these technologies may be, we must continue to critically reflect upon the type of city we want.” This ties into the lack of ethics in the current tech industry. Most programmers, myself included, view users as some kind of dumb entity that will interact with our system, granting us some input for us to process and spitting something back out at them. I feel there needs to be a class specifically dedicated to professional ethics in the software industry; some courses do present a code of ethics, like the ACM’s, but it just becomes another topic that students forget by the next semester. Companies need to realize that algorithms making decisions that greatly affect the lives of millions of people need some kind of regulation to ensure their effectiveness, as well as some kind of control that prevents bias, because no matter how impartial a developer tries to be, we all carry biases and preferences that have grown into us as we have grown up.

Pretty much everything that involves the use of a person’s information for financial gain needs to be regulated. Data is still treated as a resource you can easily obtain and OWN, as if it were some kind of mineral (data mining, duh).

I’ve written about this in my ethics essay (in Spanish):

Smart Surveillance

Check it out, as well as the references for additional information.

Defense in Depth

Defense in depth (also known as the Castle Approach) is an information assurance (IA) concept in which multiple layers of security controls (defenses) are placed throughout an information technology (IT) system. Its intent is to provide redundancy in the event a security control fails or a vulnerability is exploited, and it can cover aspects of personnel, procedural, technical, and physical security for the duration of the system’s life cycle.

Ken has always told us that security is pretty much adding layers of protection to our system. In plain words, we don’t rely on a single security measure to keep our system safe; instead, we keep adding “walls” to make it harder for attackers to gain access to our “castle”. The bigger our castle is, the more attractive it becomes for invaders to come and attack us, which is why we have to build a safe, layered fortress to protect against all these attacks.

There are several DiD models out there, but most of them include the following general categories, listed from outermost to innermost layer:

  1. Policies/Procedures/Awareness/Education
  2. Physical
  3. Perimeter
  4. Internal Network
  5. Host
  6. Application
  7. Data

Each of these layers represents an opportunity to incorporate security into the information technology framework and make sure all the bases are covered. Taking a layered approach, with defense measures at each layer, can greatly reduce the risk and impact of vulnerabilities, attacks, and intrusions, saving a lot of time, money, and frustration.
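To make the idea concrete, here is a minimal sketch in C of what defense in depth looks like inside a single request handler. The helper names are hypothetical stand-ins I made up for this post, not any real framework’s API; the point is simply that every layer gets a chance to reject the request, so breaching one wall is not enough.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for real controls at each layer; in practice
 * these would be a firewall rule, an authentication service, an access
 * control list, and an input validator. */
static bool perimeter_allows(const char *ip)    { return ip != NULL && ip[0] != '\0'; }
static bool host_authenticates(const char *tok) { return tok != NULL && tok[0] != '\0'; }
static bool app_authorizes(const char *tok)     { return tok != NULL && tok[0] == 'a'; }
static bool data_is_valid(const char *payload)  { return payload != NULL; }

/* Every wall must hold: a request is served only if all layers approve. */
static bool handle_request(const char *ip, const char *tok, const char *payload)
{
    if (!perimeter_allows(ip))    return false; /* perimeter layer   */
    if (!host_authenticates(tok)) return false; /* host layer        */
    if (!app_authorizes(tok))     return false; /* application layer */
    if (!data_is_valid(payload))  return false; /* data layer        */
    return true;
}

int main(void)
{
    printf("%s\n", handle_request("10.0.0.1", "abc", "hi") ? "accepted" : "rejected");
    printf("%s\n", handle_request("10.0.0.1", "",    "hi") ? "accepted" : "rejected");
    return 0;
}
```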

DiD is historically based on a military strategy to increase defenses so that one breach will only lead to more defense measures; this strategy may exhaust the resources of the offense in the meantime, allowing key defensive resources to remain protected.  There are, however, advantages and disadvantages to this strategy.

DiD’s main advantage is, of course, added security.  DiD creates multiple layers of protection so that if one defensive measure fails, there are more behind it to continue protecting the assets.  With this approach, a castle may be able to resist even the strongest and longest of sieges, allowing our system to survive.

DiD’s main disadvantage is complexity.  Implementing security at every layer takes a lot of planning and valuable resources.  It is important to consider if and how security measures can work together as well as the maintenance, administration, and monitoring that is required for each.  We have to take into account the amount of stone, workers and guards we are going to need to build those walls and also maintain them, as well as rebuild them in case of attacks.

Defensive programming – What they don’t teach you at school

“Better to be despised for too anxious apprehensions, than ruined by too confident security.”— Edmund Burke

Most of us at school build our projects to be barely functional: the minimal code needed for the program to work and be accepted by our professors. This is not, and cannot be, how things work in a real work environment. The world we live in is full of threats to the software we create, and we need to prepare our code for those threats; we must always assume that there will be someone trying to use our program maliciously or trying to gain privileged access to it. One of many practices we can use to make our code safer is defensive programming.

Some of the most common practices or assumptions in defensive programming are:

  • Encrypt/authenticate all important data transmitted over networks. Do not attempt to implement your own encryption scheme, but use a proven one instead.
  • All data is important until proven otherwise.
  • All data is tainted until proven otherwise.
  • All code is insecure until proven otherwise.
    • You cannot prove the security of any code in userland, or, more canonically: “never trust the client”.
  • If data are to be checked for correctness, verify that they are correct, not that they are incorrect (see the validation sketch right after this list).
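
To illustrate that last point, here is a small sketch of my own in C that validates input positively: a value is accepted only if it parses completely as a number and falls inside an explicit allowed range; everything else is rejected.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Verify that the data is correct: accept an age only if the whole
 * input is a number inside an explicit whitelist range. */
static int parse_age(const char *input, long *out)
{
    char *end;
    errno = 0;
    long value = strtol(input, &end, 10);

    if (end == input)                 return -1; /* no digits at all  */
    if (*end != '\0' && *end != '\n') return -1; /* trailing garbage  */
    if (errno == ERANGE)              return -1; /* out of long range */
    if (value < 0 || value > 150)     return -1; /* outside whitelist */

    *out = value;
    return 0;
}

int main(void)
{
    char buf[32];
    /* fgets bounds the read; gets() would invite a buffer overflow */
    if (fgets(buf, sizeof buf, stdin) == NULL) return 1;

    long age;
    if (parse_age(buf, &age) != 0) {
        fprintf(stderr, "rejected: not a valid age\n");
        return 1;
    }
    printf("accepted: %ld\n", age);
    return 0;
}
```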

Defensive programming is a subset of defensive design, which focuses on keeping the product working and available no matter the circumstances. Defensive programming also divides into many categories, one of which is secure coding.

Secure coding is the practice of developing software in a way that guards against the accidental introduction of security vulnerabilities. Some of the most common vulnerabilities introduced during the coding process are:

  • Buffer overflow: happens when a process tries to store data beyond a fixed-length buffer. The overflowing data may overwrite data in the adjacent memory locations, which are not part of the buffer, and this can result in a security vulnerability such as stack smashing, or in program termination.
  • Format string attack: happens when a malicious user supplies input that is eventually passed as the format argument to a function that performs formatting. This can crash programs, and it is similar in spirit to SQL injection, which occurs when input for a SQL database is not properly sanitized, allowing a user to send commands directly to the database and possibly drop tables or delete information.
  • Integer overflow: occurs when an arithmetic operation results in a number too large to be represented in the memory space defined for the numeric datatype. A program that does not check for overflow introduces potential software bugs and exploits. (A sketch of guarding against all three follows this list.)
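
As promised, here is a small sketch of my own showing one guard for each of the three issues. Note that __builtin_add_overflow is a GCC/Clang builtin; on other compilers you would compare against INT_MAX before adding.

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    const char *user_input = "a possibly very long attacker-controlled string";

    /* Buffer overflow: bound every copy to the buffer size.
     * snprintf truncates and always NUL-terminates. */
    char name[16];
    snprintf(name, sizeof name, "%s", user_input);

    /* Format string attack: never pass user data as the format itself. */
    printf("%s\n", name);   /* safe */
    /* printf(name);           unsafe: the attacker controls the format */

    /* Integer overflow: check before trusting the result. */
    int balance = INT_MAX - 5, deposit = 10, result;
    if (__builtin_add_overflow(balance, deposit, &result)) {
        fprintf(stderr, "rejected: addition would overflow\n");
    } else {
        printf("new balance: %d\n", result);
    }
    return 0;
}
```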

I believe this is a very important way of programming that ALL of us need to apply when we start our professional careers, as everything needs to be as protected as possible from outside attacks. I think it is kind of sad that this way of programming, of adding layers of security to our software, isn’t taught directly at most schools and has to be learned by every graduate once they start working. There should be more classes focused on the real-world work environment, how it works, and how we can be successful in it.

Hasta la vista baby

Hey Ken, how you doin’? It’s late over here and this post is kinda overdue, but here I am typing. I hope you are having a good day/night when you read this, because I sure am writing this post from the bottom of my heart. I really like the way you teach your class, the freedom you give, and how it really makes you think about the system currently implemented in our school.

Honestly, what I really love about the class is that for any topic, honestly ANY topic that comes up, you can say something about it that lets us learn a bit further. That is something no other teacher I’ve had has: true interest in, and knowledge of, topics beyond what the course is supposed to teach us. Setting up calls with people who have been in the industry for years and letting them tell us their experiences is something no other professor has ever done. I truly appreciate everything you do for the Computación community at Tec, and I hope I can see you around and maybe even in class again. I wish you the best, Ken, and I will truly never forget this experience. Thank you so much.

And yeah, you already saw the video, and many of my classmates posted it as well, BUT HERE IT IS SO YOU CAN WATCH IT AGAIN

Software Verification and Validation

Software verification and validation is a set of checking and analysis processes that ensure the software being developed conforms to its specification and meets the customers’ needs.
The goal of verification and validation activities is to assess and improve the quality of the work products generated during software development and modification.

The quality attributes to assess are correctness, completeness, consistency, reliability, usefulness, efficiency, adherence to standards, and overall cost-effectiveness.

There are two types of verification: formal and life-cycle. The latter is the process of determining the degree to which the work products of a given phase of the development cycle meet the specifications established during previous phases. Formal verification is a rigorous mathematical demonstration that source code conforms to its specification.
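
As a toy illustration of the difference (my own sketch, not from the course material), life-cycle verification can be approximated with executable checks that an implementation meets the specification written for it, while formal verification would prove the same properties for every possible input.

```c
#include <assert.h>
#include <stdio.h>

/* Specification (hypothetical): clamp(x, lo, hi) returns lo if x < lo,
 * hi if x > hi, and x otherwise, assuming lo <= hi. */
static int clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main(void)
{
    /* Verification in miniature: executable checks that the code
     * matches the specification above. A formal proof would cover
     * all inputs, not just these samples. */
    assert(clamp(-5, 0, 10) == 0);  /* below range -> lo */
    assert(clamp(99, 0, 10) == 10); /* above range -> hi */
    assert(clamp(7, 0, 10) == 7);   /* in range    -> x  */
    puts("all specification checks passed");
    return 0;
}
```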

Verification and validation involve assessing work products to determine their conformance to specifications. These include requirements specifications, design documentation, various general style principles, implementation-language standards, project standards, organizational standards, and user expectations, as well as the meta-specifications for the formats and notations used to specify the various products.