Commercial software development

Here are my attempts to find the angles from which commercial software development does not look quite as miserable as it does to an unprepared observer.

Purpose

Doing something you do not see the purpose of can be boring, tiring, and/or stressful.

Commercial software is what most programmers have to work on in order to generate income. Roughly equivalently, it is the software we do not write for free. It is easy to view much of it as boring, useless, and/or scammy; it is also easy to keep writing about how bad enterprise and other commercial software is, and to speculate on the causes of that, but harder to do the opposite (excluding justifications that would fit outright scams or spam equally well -- such as "it is valuable as long as somebody pays for it").

Yet it is usually possible at least to sort things from harmful, through useless, to somewhat useful in a given context, especially if you are open to using many technologies: it is still better to make something useful using lame technologies than to use nice technologies to make something useless or harmful.

Technologies

Generally the technologies favoured for commercial development are the ones considered "easy", though that "easiness" is apparently judged by how hard it is for an unprepared person to learn the very basics, not by advanced and prolonged usage. This seems to be driven by the goal of getting more easily interchangeable workers, which is partially caused by people changing tech jobs often. The latter is said to make the jobs much more specialized: if a person is expected to work just for a brief time, it is more important that they become productive as soon as possible (e.g., if that takes 3 months, and they will only work there for a year, that is almost a 25% hit), so companies tend to hire for oddly specific combinations of technologies (languages, frameworks, even minor libraries and online services).

Programming languages

As of 2022, judging by vacancies and programming language popularity estimates, the most in-demand languages are Python, Java, and JavaScript; perhaps followed by Go, PHP, C#, and possibly some of their derivatives.

Java

Much of the enterprise and other commercial software is written in Java, yet I personally rarely even have a JVM installed on desktops or servers. Much of smartphone software is in it as well, with Android targeting Java from the beginning.

Java has poor system integration, coming with its own tools, infrastructure, approaches, and an awkward C FFI. It focuses on OOP, which I personally never liked. Yet it is stable, somewhat easy to learn and use, promoted by Sun/Oracle, and compiles into portable bytecode. Possibly suboptimal, but a "nobody gets fired for buying IBM" sort of thing: boring, corporate-friendly, old and familiar to many, does the job. Java documentation resembles the language itself, and pretty much any other enterprise project's documentation: it relies on JS for no apparent reason, is sluggish, and its many-layered hierarchy is a patchwork of rather different documents.

Go

A much newer language, promoted by Google. Designed specifically for commercial and enterprise software development, explicitly targeting inexperienced programmers not capable of understanding a better language (see "Why Go’s design is a disservice to intelligent programmers").

It is not the only language designed to be newbie-friendly (easy to learn, as its authors imagine it) at the expense of being better, but at least this one is not OO, has no exceptions, and is statically typed (though poorly). Go's documentation seems fine.

Python

This one I used briefly around 2010, then tried to use and to like it around 2016, and failed (I was rather annoyed by it after trying to use it as a functional language: it is said to be multi-paradigm, but it is bad for FP), tried again in 2021, and have been using it occasionally since, particularly for its nice libraries (SymPy, Matplotlib, SciPy, NumPy). It is OO and dynamically (but at least strongly) typed, with many errors being left to happen at runtime (perhaps that is why Python programmers seem to like TDD), and it relies heavily on exceptions for control flow. Because of that it feels clunky and not like something you would want for reliable software.
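
To illustrate the last two points (a generic sketch, not taken from any particular project; the names here are made up): idiomatic Python tends to leave mistakes to surface as runtime exceptions, and uses exceptions where other languages would use return values or static checks.

    # "Easier to ask forgiveness than permission": catch KeyError
    # instead of checking for the key beforehand.
    def port_for(service, registry):
        try:
            return registry[service]
        except KeyError:
            return 8080  # hypothetical default

    # A mistake like this is only caught when the line actually runs:
    def total(items):
        return sum(items) + " units"  # TypeError, but only at runtime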

On the bright side, its standard library is nice: I rewrote dwproxy in it, which involves a bit of concurrency and networking, a bit of parsing, work with a database, and path finding, and found pretty much everything needed in the library -- even a telnet client, and a heap queue that helped to quickly implement an optimized Dijkstra's algorithm. And even though there is no parsing as nice as in Haskell, regexps work fine for many tasks. I have also tried to use it for leetcode.com, and it is quite handy: I had only used C to solve similar algorithmic puzzles a while ago (2007 or thereabouts), and it is a relief not to have to deal with manual memory management; and while I generally dislike dictionaries in type systems or serialization formats for not being quite fundamental, it is also nice to have them at hand, instead of making custom hash maps. Compared to Haskell, it is nice that you do not have to worry about running into an algorithm that is tricky for pure functional programming. Python has many nice libraries (most prominently for statistics/machine learning and numerical analysis, though plenty for physics and other fields as well), and it is not such a bad language generally -- at least when compared to the ones that happened almost accidentally. The Python documentation is rather nice, and can be easily installed for offline browsing from Debian repositories, along with documentation for many packages.
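
Roughly the kind of thing that heap queue enables (a minimal sketch over a toy graph, not the actual dwproxy code): Dijkstra's algorithm over an adjacency list, with heapq as the priority queue.

    import heapq

    def dijkstra(graph, start):
        # graph: dict mapping a vertex to a list of (neighbour, weight) pairs
        dist = {start: 0}
        heap = [(0, start)]  # (distance so far, vertex)
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue  # stale entry; a shorter path was already found
            for neighbour, weight in graph.get(v, []):
                nd = d + weight
                if nd < dist.get(neighbour, float("inf")):
                    dist[neighbour] = nd
                    heapq.heappush(heap, (nd, neighbour))
        return dist

    # dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}, "a")
    # => {"a": 0, "b": 1, "c": 3}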

JavaScript

Nowadays I associate JS with bloat and with unnecessarily buggy and sluggish documents, with only rare convenient applications. But I had fun playing with it in the early 2000s, was later excited about its usage with HTML5 canvas, and used it a bit for websites around 2010 (mostly for calendars, to add form fields, and to load completion options, IIRC). It is easy to recall all the bloat and bugs, but in an attempt to look at the bright side, it is better to keep in mind its nicer and more useful applications: converse.js is snappy and convenient when there is no XMPP client installed, while web-based video conferencing software and online map viewers seem to actually be ahead of desktop ones.

The language itself is pretty awkward, using weak dynamic typing (which, I think, makes bugs more confusing than they would have been in a completely untyped language, like assembly, Forth, or m4), but not too far from other common ones. The infrastructure primarily targets web development, and client-side execution in particular. There is nice JavaScript documentation by Mozilla. And there is quite a choice of frameworks, which are commonly used, and are to blame for much of the bloat.

It can be comforting that most software projects, and especially web UIs, are ephemeral: all the bloat and bugs one introduces in those are not likely to be around for long.

Containerization, system images, deployment

Another technology that is particularly popular in commercial software is containerization coupled with system images; it is used for packaging and deployment, to contain the mess people tend to make (e.g., run everything as root, ignore ACLs and FHS, listen on 0.0.0.0 and trust TCP connections, hog all the resources, store credentials where they are readable by anyone, and so on), and/or to improve portability by shipping all the dependencies with the software. This introduces new issues (inefficiency, how to update all the dependencies, how to manage runtime dependencies from external containers and replace the rest of a regular init system, complicated log monitoring and the question of whether to monitor logs from all the system components inside every container, etc.), and you get to deal with those then. Yet it can still be an improvement over common enterprise software packaging and deployments without containerization.

While there are simple chroot + tar, systemd's nspawn/portabled/etc, and LXC, apparently Docker is the most used in commercial settings, followed by Podman. Often with Kubernetes, which also abstracts out the running of those containers, networking, and other tasks. Such abstraction layers can be hard to debug, introducing unnecessary complexity and not reusing existing tools, but it may be appealing to run many interacting processes without worrying about the systems on which they would run, especially if one needs to scale a system quickly and by a lot.

It is not technically sensible for deployment in most cases, but apparently more sensible from the organizational perspective. Just like Go's stated goals, or perhaps even closer to Java with its JVM, which tries to abstract out the underlying system. Or JS, using a web browser almost as an operating system these days. Apparently proper system integration has a rather low priority in commercial software, which seemed to be the case before containers as well -- with VMs or even physical machines dedicated to running some awkward software. And once again, compared to those, Docker is an improvement.

Containers are also convenient for legacy software, though even relatively simple systemd ones suffice for that.

Usage of online services (aka "the Cloud")

Use of remote VMs is handy and common even for non-commercial projects, and email managed by others is used even more commonly, but commercial companies tend to push it quite a bit further: analytics (for logs or data in general) seems to be outsourced often, managed storage is acquired separately from computing resources, access to managed databases is acquired separately too, as are message queues and many much more specialized bits of software hosted by others.

Many of those look scammy and/or unnecessary, once again bringing up the sad topic of unclear purposes.

The services often include awkward issue tracking systems and odd IMs or web chats. That is yet another example of technologies one may dislike, but others apparently use happily; one may dismiss them as useless when considering everyday usage, but an organization sets a context in which they do make sense. Likely it works in similar ways with other B2B services as well.

Hype waves

While working on hobby projects, one can easily dodge many of the hype waves: an individual may see that something is a bad idea, or they may have played with the newly-hyped technology already. I guess that many organizations either poke at everything, to avoid missing out, or announce the use of hyped technologies in order to attract investment. That would explain why so many vacancies include the technologies hyped at the time, making it seem like everyone focuses on those.

Somewhat related is the phenomenon some call "CV-driven development": when choosing technologies, people take into account the experience that is perceived to be useful for a future job search. It is similar to regular education, as well as to curiosity, but in all those cases technologies that are not the best fit have a good chance of being used, especially the ones hyped at the time, which amplifies the effect even further.

Existential dread

A job is a notable part of life for most people, and consideration of related choices easily makes one evaluate slightly more general life goals; "do something of dubious usefulness, using awkward tools, until you become old, sick, and finally die" does not sound like a fun prospect. But that is the worst-case scenario, while in practice the job is likely to be somewhat useful (in a human-scale time frame and in a context of interest), with okay tools, and there is a possibility of fulfillment outside of the job, with a work-life balance.

Or one may avoid trying to do something "useful": see, for instance, "how to be useless".