The Role of Programming Language Choice in the Safety and Security of Software


Building foundational security into software starts with evaluating the languages used to build it. Many languages, past and present, lack the safety features needed to prevent entire classes of vulnerabilities. By adopting safer alternatives, we can be more confident in the security and safety of the resulting software. History shows that even careful development in unsafe languages is often insufficient to achieve strong security, while modern industry experience demonstrates that adopting safer languages results in better performance and security. Government, research institutes, the private sector, and end users all demand software that is more secure by default. Modern programming languages with built-in safety features could be the answer to building a more resilient cyberspace.

Shifting Left

We might ask ourselves: “Wouldn’t it be sufficient to follow proper development guidelines, and even enforce compliance with secure tooling, so that most vulnerabilities never make it past this security checkpoint in the SDLC?” Unfortunately, that is not enough. It is possible to take a language without built-in safety features and develop in it in the most secure fashion possible. But that does not change the fact that the underlying language still allows unsafe access to memory or unsafe process management. No matter how securely and strongly the house is built - layers of bricks, a strong roof, weather-resistant doors and windows - if it stands on a swamp, the ground can eventually erode and the “securely built” house can break.
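As a concrete illustration of what a language-level safety feature looks like, here is a minimal sketch in Rust (one example of a memory-safe language; the `lookup` helper is hypothetical, used purely for illustration). An out-of-bounds read - undefined behavior in a language with raw pointer access - becomes an ordinary, recoverable value:

```rust
// Bounds-checked access: an out-of-bounds read, which in an unsafe
// language could silently corrupt memory or leak data, is surfaced
// here as a recoverable `None` value.
fn lookup(data: &[u8], index: usize) -> Option<u8> {
    // `slice::get` returns `None` for an out-of-range index instead
    // of reading past the end of the buffer.
    data.get(index).copied()
}

fn main() {
    let buffer = [10u8, 20, 30];
    assert_eq!(lookup(&buffer, 1), Some(20));
    // No crash, no silent corruption - just an explicit "not there".
    assert_eq!(lookup(&buffer, 99), None);
    println!("all checks passed");
}
```

The point is not this particular API but the guarantee behind it: the language refuses to let the out-of-range access happen at all, regardless of how disciplined the developer is.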

And that is precisely why it is important to address these issues at the level of the language’s capabilities. In cybersecurity, shifting left means ensuring the security of a given feature, or of an entire application, earlier in the SDLC. For example, instead of relying on QA specialists’ manual testing to confirm that the application is secure and functional, we can write a suite of automated tests that runs every time a change is introduced into the code base. The security checkpoint moves from later in the cycle to earlier in it.
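A minimal sketch of such an automated check, using Rust’s built-in test harness (the `sanitize_username` rule is made up for illustration; in a real project this test would run on every change, e.g. via `cargo test` in CI):

```rust
// A hypothetical input-validation rule: keep only ASCII alphanumerics.
fn sanitize_username(raw: &str) -> String {
    raw.chars().filter(|c| c.is_ascii_alphanumeric()).collect()
}

// This unit test executes on every `cargo test` run, so a regression
// in the validation logic is caught at commit time, not by manual QA.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn strips_unexpected_characters() {
        assert_eq!(sanitize_username("alice<script>"), "alicescript");
    }
}

fn main() {
    println!("{}", sanitize_username("bob!"));
}
```

The test is cheap to write once and then guards the code base automatically, which is the essence of moving the checkpoint left.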

In software development, the choice of language is one of the earliest decisions made in the SDLC, which is exactly why it is so impactful and so critical for the rest of the cycle and for the resulting product. The National Institute of Standards and Technology (NIST) says the following about shifting security left:

Most aspects of security can be addressed multiple times within an SDLC, but in general, the earlier in the SDLC that security is addressed, the less effort and cost is ultimately required to achieve the same level of security. This principle, known as shifting left, is critically important regardless of the SDLC model. Shifting left minimizes any technical debt that would require remediating early security flaws late in development or after the software is in production. Shifting left can also result in software with stronger security and resiliency (National Institute of Standards and Technology, 2022).

Security as a built-in Feature

Programming languages with built-in safety mechanisms make security a built-in feature rather than an afterthought. Modern products must treat it as essential: customer demand shows that people today care about their data and activity being secured, not compromised. If one company fails to properly secure its users, those users will move to the more secure alternative that a competitor brings to the market.

A word on AI

Vibe coding, or AI-assisted development, often involves a choice of programming language, whether it is a conversation with an AI to figure out the tech stack for an idea, or the AI responding with a script in a language of its own choosing. If you have used any of the major LLMs - ChatGPT, Gemini, or Claude - you may have noticed that their responses generally prefer certain technologies over others. The preference follows whichever technologies were most heavily represented in the training data, which comes from public sources such as public GitHub and GitLab repositories. And which languages dominate the industry by volume of code produced? Some sources put JavaScript in the lead, others Python. It shows clearly in AI responses: most of the time you are advised to go with JS or Python for your project.

Using JS or Python is not inherently bad. In fact, these languages have evolved considerably since their inception, addressing many vulnerabilities and introducing new structures and secure paradigms. But this bias in responses creates a self-feeding loop: an AI recommends JS for your next open-source project, you vibe-code the app and publish it, and AI companies then use this new repository - along with thousands of others created the same way - to train their latest models. This has several implications:

  • An LLM creates code that is then used to train new iterations of the LLM. Feeding an LLM’s outputs back into its inputs will eventually poison the model, as researchers in the field have already shown.
  • LLMs keep suggesting technology based on the data that they are most familiar with. This disregards the cases where a more secure technology might be a better choice for the project.
  • Most projects start to look and feel the same. LLMs are good at predicting, and predictions tend to land in the middle of the road. Soon every project uses Next.js with shadcn/ui components and Tailwind CSS styling, hosted on Vercel and Supabase.
  • Finally, when all projects rely on the same tech stack and a vulnerability is eventually discovered (as happened recently with React Native and Metro Bundler), all of these new projects are thrown under the bus and exposed to attack at once.

Solution?

Do not rely solely on AI for suggestions; seek out expert insight. Any project would benefit far more from a 30-minute consultation with a seasoned software professional than from a middle-of-the-road response from an LLM. It is entirely possible that the expert would recommend the same approach the LLM would. But the expert brings far more context about your business problem, and experience to back up the claim, which an LLM does not have.

AI makes it easy and convenient to take a shortcut and run with the proposed solution. Evaluate your domain. Determine your needs and risk tolerance. Gather professional advice. Make an educated decision.