Most of the population wouldn’t like a fully democratized internet. I am thankful for that because the kind of real and raw democracy available on the emerging peer-to-peer internet can work only as long as the number of participants remains small.
Several years ago HBO had a series called “Silicon Valley” about a tech startup that, over the course of four seasons, developed an innovative way to democratize the internet and wrest control from all the evil corporations controlling it. It was fiction, of course, but life often imitates art.
Looking just at social media, we have companies like Meta who want to create an ever more immersive virtual world completely under their control, be it ultimately benign or evil. Meta’s vision isn’t completely new; Second Life was the Metaverse of its day.
We also have a loose but growing collective who want to wrest control away from the big centrally controlled server model and replace it with an uncontrolled and uncontrollable peer-to-peer model. The utopian goal is a purely democratized new internet where no person, company or government can impose their will to censor or in any way control how people interact. In this magical land, puppies and unicorns play together in harmony with nature. Too much? Perhaps, but a fully democratized internet isn’t going to happen either.
An uncontrolled and self-regulated peer-to-peer model was the original vision of the internet. The only real difference today is the focus on connecting individual client machines together at the edge instead of centralized servers.
A fully democratized internet beyond the control of any individual, company, group or government sounds awesome to me. But the vast majority of people in the United States and the world just couldn’t use it. Look at Twitter today. Twitter has everything it needs to be a democratized platform. You as an individual have the ability to block or mute anyone who posts anything you do not like or do not want to read. You can even control who can reply to anything you post. But few fully embrace these tools and use Twitter that way. Instead, many users believe they have a right to prevent everyone from seeing and reading things they don’t like. They profess a belief in freedom of speech, but in practice they only believe speech they deem appropriate should be allowed. These people could not function within a fully democratized Twitter because they lack the ability to just walk away. They have to exert their control over another individual, and Twitter provides easy means to do it. It’s rather ironic considering these same people claim to embrace democracy when it suits them.
I’m using Twitter as an example because the behavior pattern is easily observed there. But it’s not much different on Facebook, LinkedIn, Gab, Parler, Reddit or any other platform, including Trump’s newish Truth Social network. There is a lot of money to be made in providing safe spaces to people and feeding them advertising in the process. For that reason, social media as we know it today isn’t going anywhere any time soon.
As you read this I ask you to think beyond this as a right or left thing or even a liberal or conservative thing. Think of it within the bounds of emotional maturity … and sadly most people in the world today lack the levels of emotional maturity seen in previous generations.
Today’s social media isn’t a forum where real debate and discussion take place. It is a world where trolls bully and taunt people while others seek to control the expression of opinions they don’t like. Today’s social media provides a very distorted view of society where extreme and isolated events are hyped to incredible levels. Few on social media actually have any kind of dialog with anyone outside their tight circle of friends and relatives. Mostly, people talk, yell and whine at and past each other while looking for the dopamine hit they get when they think they’ve won some point; forced someone to apologize; gotten someone suspended or banned; or the ultimate trifecta … all three! Trolls are no different; they get their dopamine rush by seeing those they target react to their taunts.
I’m ignoring the tidal wave of bots flooding through all social media platforms each day. They can’t exist on any fully democratized platform, but that’s fodder for another post.
On any site supported by advertising, you are the product. As a result, there is a compelling need to have you return and spend lots of time and attention on a site so you will see lots of ads. This inevitably leads to an ever tightening echo-chamber for the users and a target rich environment for trolls. Sure the platforms all give lip service to avoiding user echo chambers, but it’s in their own interests to provide users what they want and only what they want so they will return over and over again.
The democratization of the internet isn’t going to eliminate any of this. You can’t get the kind of dopamine hit on a democratized social media platform that you get on Twitter, for example. You won’t have the means to exert your control (real or perceived) over another individual. You can block someone posting things you don’t like from passing through your peer node. If you are part of a self-governing group of nodes you might even get to vote on whether the group also blocks them. But you won’t be able to stop them posting what they want on their own node. You of course have the courts available if what they post is untrue and defamatory or outright illegal in some way. But there won’t be some central group you can complain to and get them to do something about it on your behalf.
This is why a democratized internet isn’t going to displace the centralized internet any time soon, if ever.
The same is happening with search and information delivery in general. The major search engines have spent decades refining their algorithms to show you the search results they think you want to see. They want you to use them regularly to sell your attention to advertisers. Now, these same search engines are struggling with misinformation and intentional disinformation in their results. They could drastically improve this by simply letting users vote content up or down. They won’t.
Distributed search engines operate differently. Instead of an algorithm trying to decide what you want to see and balancing that with what those controlling the search engine want you to see, you have actual people who add links with descriptions and keywords. Then live people vote on the quality of the search results they see for any given search. Because the results are crowd-sourced and continually crowd-refined, you don’t see the unending lists of clickbait that surround many popular search terms on the major search engines today.
But the democratization taking place isn’t going to be reversed either. It is spreading because it exists beyond the control of companies, individuals and governments. I’m on a few discussion forums built on peer-to-peer technology. We don’t have a troll problem because trolls don’t get the reactions they crave. They get blocked and we move on. There are some really great discussions with widely diverse points of view all along the political spectrum. When disagreement devolves into something ugly, it evaporates as quickly as it forms because people just walk away, leaving the miscreants to themselves. At that point, one of two things happens: they either comply with the peer pressure and participate through discussion instead of attack, or they simply leave.
I don’t foresee a time when the internet is fully or even mostly democratized because thankfully, only a small percentage of any population really want it. We always have had and always will have a bifurcated internet. Interestingly, those who find a democratized internet interesting and even welcoming can easily function in the centrally controlled version as well. But the opposite is just not true.
WordPress is 19 years old. For a code base, that is ancient. I started using WordPress in 2003 after Movable Type changed its licensing model. WordPress quickly became the dominant player in the Content Management System space and it still is today. But there are other CMS applications out there. Is it time to abandon WordPress for the greener pastures these other systems promise?
In many ways, WordPress suffers due to its success. WordPress can be extended and modified to handle many things. The extensive library of themes and plugins makes it possible for WordPress to be a blog, a marketing site, a sales landing page, an eCommerce site, a forum, a portfolio, a membership site and, if you really contort it, even a social network.
The median quality of the available themes and plugins is sketchy at best. Creating a plugin is deceptively easy … and therein lies a huge problem. Any given plugin has the potential to create a security or performance problem … sometimes both. Unless you have the training or expertise to look at the code behind a plugin or theme, you are playing with fire every time you add a new one to your site.
The rating system helps, of course. By using themes and plugins from reputable groups with a large user base, you are far less likely to have a problem … with their plugins and themes … but what about interactions between theirs and someone else’s?
The PHP team added namespaces in version 5.3 in 2009. The current GA version of PHP is 8.1. Yet, thirteen years later there are still plugins and themes not using namespaces. This is insane and demonstrates a fundamental problem you face as you build, use or manage a WordPress based site using anything beyond WordPress core.
But WP core doesn’t really fare much better. It still relies on way too much loaded globally and doesn’t require themes and plugins to use namespaces at all. WordPress will allow you to namespace your code, but it doesn’t expect or require you to do so. After 13 years, any legitimate backwards compatibility arguments are long past their expiration date. WordPress’s failure to use namespaces throughout its own code base, and to require theme and plugin developers to do the same, makes the WordPress ecosystem less secure than it has to be. It is one of my top peeves about WordPress.
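If you want a rough sense of how widespread this is on a site you manage, you can audit for it from the command line. A hedged sketch: the demo below builds two fake plugin files in a temp directory so it is self-contained, but pointed at a real wp-content/plugins directory the same grep works unchanged. (It only catches files whose namespace declaration starts a line, which is good enough for a quick survey.)

```shell
# Illustrative audit: list plugin PHP files that never declare a namespace.
# The /tmp/ns_demo tree stands in for wp-content/plugins on a real site.
mkdir -p /tmp/ns_demo/plugins/good /tmp/ns_demo/plugins/legacy
printf '<?php\nnamespace Acme\\Widgets;\n\nclass Widget {}\n' > /tmp/ns_demo/plugins/good/widgets.php
printf '<?php\nfunction acme_init() {}\n' > /tmp/ns_demo/plugins/legacy/old.php

# -L prints files with NO match, i.e. code living entirely in the global namespace
grep -rL '^namespace ' /tmp/ns_demo/plugins --include='*.php'
# → /tmp/ns_demo/plugins/legacy/old.php
```

Every file that prints is a candidate for a naming collision with every other plugin and theme on the site.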
In spite of all of the above, there is only one type of site I will no longer build, manage, maintain or support and that is an online store.
WooCommerce is an awesome plugin from the same people behind WordPress. It is well written, reasonably secure, performant and has a ton of useful features. But if you have an online shop built using WordPress and WooCommerce and you use anything from any other vendor, you are running on borrowed time. All it will take is for one of those other plugins or themes to push an update introducing a security hole and everything is undone. For me, the risks are just too high and you aren’t really saving any money versus using a dedicated online shop like Shopify. Obviously, there is a market for WooCommerce and obviously, for whatever reason, some people prefer to host their own store on servers they control. I’m not one of them and I no longer manage WooCommerce sites for any of my clients. I do have clients who have eCommerce functionality in their sites. But WordPress isn’t used to handle the details.
There are also use cases where WordPress is just overqualified, such as a static site. It can do it, but there is just no reason to introduce the complexity of a CMS to create a couple of pages of static content. Just use HTML and CSS and move on.
WordPress still shines best when used for things like blogs, marketing sites, information sites, knowledge bases, support sites, sales funnel sites … basically anything that can make really good use of a content management system.
Of course there are CMSs out there built around much more modern architectures that do not allow their code to run on older and less secure versions of the languages used to build them. But it feels like you can’t throw a rock without hitting someone who has at least some familiarity with WordPress. That means you can usually find someone to help you get over a hurdle. It might not be an elegant solution, it might have some performance problems, maybe even a security hole or two, but it’ll work.
For me, WordPress isn’t my choice for static sites, web applications or online stores. I didn’t talk about it above, but I don’t use WordPress to build web applications because it isn’t the right tool for the job. But for any other content heavy site … WordPress is an excellent choice.
So, is it time to abandon WordPress? … Not for the things it still does well.
In the past when I have had sites like this I’ve always spent a lot of time at the beginning to lay out the site and make the page and post templates just right. I was always letting perfect beat down good, and in the process it was hard for me to focus on the content.
This time around, I forced myself to do it differently. When I brought this site live a few weeks ago, it was just plain. No styling, no images, nothing. I told myself I would do those things once I was far enough along for posting to be a habit. At least somewhat of a habit.
I think I’m at that point. I have a pretty good string of published posts and enough started and waiting for me in draft mode to keep me going for a while.
Over the last couple of days I’ve spent some time in Divi doing a little theming. I haven’t done all the pages I normally would. I might not ever do them. That’s kind of the point of the way I am approaching this site. It’s going to be perfectly imperfect … and I’m just fine with that.
But, I added a header and a footer and themed the posts page to handle the featured image and have a consistent look and feel. I’d ask you to tell me what you think of it in the comments but the only comments posted here are from spambots and honestly I don’t really care what anyone thinks of the theming and styling choices I’ve made.
This site has one major purpose. For me to collect things, ideas, best practices, etc. If someone happens upon here and finds something useful … awesome … but I’m not counting on it.
Don’t think this is in any way a slap against Laravel or the great work Taylor Otwell and his team are doing. It isn’t. Laravel Sail is an awesome advancement making it faster and easier for any user to start developing a web application using Laravel. Taylor and his team have made it literally brain dead simple to get a new project up and going using Sail.
I will never add Sail to a complex existing code base again.
First, some background …
I’ve used Laravel Valet for a very long time and it has been awesome. It’s officially supported only on macOS but there is a version for Linux too. It’s great. You can point Valet to a directory where you have all your projects and it does some incredible magic behind the scenes. You end up with each project directory acting as a .test domain to use for your development and testing. It can run one version of PHP for all sites under it, or a different version for a specific site. Truly awesome. But it does rely on what you have installed on your local machine.
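For anyone who hasn’t tried it, the day-to-day surface area is tiny. A sketch of the usual commands, assuming a recent Valet install (the site and PHP version names here are just examples):

```shell
cd ~/Sites && valet park                 # every folder in ~/Sites becomes a .test site
valet secure myproject                   # serve myproject.test over HTTPS
valet use php@8.1                        # switch the PHP version Valet serves globally
valet isolate php@7.4 --site=legacyapp   # pin one site to an older PHP
```

That last command is the per-site PHP trick mentioned above.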
Sail is different. Because it runs in containers, everything is completely isolated and can easily be wildly different from what you have installed. In fact, to use Sail, all you need installed is the operating system and Docker.
I have one client who has some of their data living in Microsoft SQLServer and I have to access it from PHP using Laravel to provide the functionality they need. Now, this isn’t a problem. The MS ODBC drivers and the SQLServer libraries for PHP are available for the Mac and the version of Linux running on their test and production machines.
But it’s a pain in the ass to install them and they have to be reinstalled every time PHP is updated. Applying normal Linux updates usually includes PHP updates and upgrades. If you have these applied automatically, then you also need a process to detect a PHP update or upgrade and either try to update the MS drivers and libraries automatically or notify an administrator to stop everything and do it manually.
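To give a flavor of what that reinstall looks like on a Debian/Ubuntu box: the package and PECL names below are the real Microsoft/PECL ones, but the repo setup is omitted and the PHP version in the paths is an example, so treat this as a sketch rather than a recipe.

```shell
# Microsoft ODBC driver and headers (Microsoft apt repo assumed to be configured)
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18 unixodbc-dev

# The PHP extensions must be rebuilt against the *current* PHP version
sudo pecl install sqlsrv pdo_sqlsrv

# And re-enabled, because a PHP upgrade drops the old extension config
printf 'extension=sqlsrv.so\n'     | sudo tee /etc/php/8.1/mods-available/sqlsrv.ini
printf 'extension=pdo_sqlsrv.so\n' | sudo tee /etc/php/8.1/mods-available/pdo_sqlsrv.ini
sudo phpenmod sqlsrv pdo_sqlsrv && sudo systemctl restart php8.1-fpm
```

Every `pecl install` step has to happen again after each PHP upgrade, which is exactly the pain described above.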
Just for the record, automating it is hit or miss at best. Sometimes it will work but most of the time some manual intervention has to happen. I could make some snide comment but it’s pointless. There is a better way. A much better way if you use containers in production.
I’ve had this client for a long time and even though their servers have been through updates, upgrades and even replacements, I never took the time to move everything to containers in development or production. It is one of my goals for this year and I was excited at the prospect of Laravel Sail getting me there on the development side quickly and easily.
It was not meant to be.
Adding Sail to the project was easy with a simple composer command. Installing it was easy too. Even downloading and running the images were all very easy. But, that’s where the trouble started.
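For reference, the easy part really does amount to just a few commands (these are the standard Sail commands; the service choices are made interactively):

```shell
composer require laravel/sail --dev   # add Sail to an existing Laravel project
php artisan sail:install              # choose which services to scaffold
./vendor/bin/sail up -d               # build and start the containers
./vendor/bin/sail artisan migrate     # run artisan commands inside the containers
```

It is everything after this point, as described below, where an existing code base can bite you.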
When Sail asked what I wanted setup, I did not choose MySQL because this existing code base uses existing development databases in MySQL installed on my Mac. These mirror the databases in production and are used by multiple applications existing in different code bases.
After everything came up, I tried to access the application in my browser and was greeted with a message saying it couldn’t connect to MySQL. No problem, I thought. I just need to update the host so the container can see MySQL on my Mac.
That’s when I noticed Sail had skull fucked my .env file. It had updated all of the MySQL connection information to use a MySQL instance that did not exist in the containers Sail was using. It changed all of the DB_HOST entries and changed all of the DB_PASSWORD entries to “password” for both the MySQL and the SQLServer connections.
Nothing in the documentation says Sail is going to “help” you in this way and there were no warnings, no asking, “hey, is it okay if we muck around in your .env file?” Nope, it just did it … without creating a backup, btw.
I spent a little while teasing through the numerous changes made to my .env file, but after about half an hour I decided to just roll back and abort this fiasco. I rolled back the code in git and restored my .env file from a Time Machine backup because it isn’t tracked by git.
My biggest concern was I would forever be fighting these special edge cases because of the way this code base has evolved. The ultimate goal is a rewrite of this anyway, so I will just do what I originally planned to do and spin up an Nginx and PHP-FPM container with the ODBC drivers and SQLServer PHP libraries and just move on. When I start a clean-slate rewrite, hopefully later this year, I’ll use Sail and should have no problems.
I still love Sail and will use it on a project that is isolated and preferably standalone, but I will never use it in a complex existing project again.
The United States landed two men on the moon 53 years ago today.
Had we continued our space program at the time instead of caving into the left and basically shutting it all down, just imagine where we would be now.
We would have a real permanent space station above the Earth at a much higher orbit instead of the pitiful ISS in low earth orbit with a limited lifespan.
We would have factories in space producing semiconductors, pharmaceuticals and other goods of higher quality and at lower prices than we can hope for today.
We would have solved the renewable energy problem and our reliance on fossil fuels would be much less or perhaps non-existent.
Yes, the world would be a much different place if we had continued our space program with the same intensity and the same commitment as we had in the 1960s.
I still remember where I was and what I was doing on the afternoon of July 20, 1969. I still remember watching those first fuzzy black and white images from the moon. At that moment, the United States and all of mankind had a wondrous possible future starting to unfold.
Then we turned our backs on it and the world is worse off because we did.
Microsoft Internet Explorer was possibly one of the biggest fails I’ve seen in my career. It is (soon to be was) a product based on extremely mature standards freely available to anyone who wanted to use them to develop a product. The expectation is that all browsers meet those standards. IE failed at that task in many spectacular ways.
To their credit, Microsoft learned from their previous failures and the Edge browser is based on Chromium, from Google. The Edge team seems to have absolutely nailed it this time. Edge is fast, reasonably secure (I am still running some extensions to close some holes), and unless you are using IE mode (just don’t), it meets all the modern standards you would expect from a browser in 2022.
I have started playing with it and so far, I have no issues with it. Because it is based on Chromium, the developer experience is essentially the same as Chrome. So, it’s been pretty easy to do a head-to-head comparison between Edge and Chrome for debugging Vue.js front end applications.
My only concern is Microsoft is Windows focused. While it’s true they have started embracing more and more of the Linux ecosystem, Windows is still their flagship product. My concern is how much attention they will focus on Edge for other operating systems over time. We will see, but for now, Edge is a solid, first-class browser.
Software developers either welcome every opportunity to refactor code they work on or they avoid it like the plague. There is a tremendous amount of research showing developers who aggressively refactor working code are better software developers. They produce better code with fewer bugs, and it is easier to maintain as time goes by.
Recently, I read an article talking about why developers hate changing programming language versions. The author raised a number of valid concerns but didn’t talk about how all of them are addressed if you use a modern development workflow. If you’ve been writing code for decades you quickly embrace tools and ways of dealing with different language versions, library versions, operating system versions, and on and on. It’s just a fact of life for developers today and there are a lot of documented tools and processes to help you and your team deal with this.
But what caused me to stop, shake my head and say, “Man, I wouldn’t want to try to fix a bug or extend a feature in their code,” was when I read this paragraph …
Developers call the process of editing old code “refactoring” and it’s a process that commonly introduces new bugs or performance problems. So that’s why, going back and editing old code, well – that’s the last thing most development teams want to do, particularly when the existing code base is running stable and doing its job.
Sadly, it’s mostly true. Most development teams do try to avoid refactoring code especially when the code is working as expected. But they shouldn’t avoid it, they should embrace it because over time the quality of their code base incrementally improves. And they lessen the heavy lift that can come from a version change anywhere in the food chain of their application.
If you ask a developer or a team of developers why they avoid refactoring working code, they always cite the risk of introducing new bugs. It’s perfectly reasonable to presume humans (which we developers are, contrary to what many believe) will make mistakes. It’s possible a developer could introduce a new bug.
But, if you use automated unit testing during your development cycle you can easily mitigate that risk. If you also use automated end-to-end testing in your development cycle you can pretty much eliminate that risk.
I’ve had this discussion many times with new and seasoned developers. Often they will lament that it would be easy to do that if those tests existed but they don’t so it’s much safer to let sleeping dogs lie and refrain from refactoring code.
My response? So write some unit tests before you start refactoring. Write some unit tests that at least exercise the expected path. If you have time, go for some exception paths too, but something is better than nothing and being able to confirm the code works as expected when the inputs are as expected is a huge step forward. You don’t have to write tests for the entire application, just the ones you need for the specific code you are refactoring in its current state. Once you have those, then you can go to work with confidence your refactoring is solid and works as expected.
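In a PHP project, the loop I’m describing is nothing fancier than this (the test file, class and commit message are made up for illustration):

```shell
# 1. Pin down current behavior with happy-path characterization tests
./vendor/bin/phpunit tests/Unit/InvoiceTotalsTest.php

# 2. Refactor in small steps, re-running the same tests after each change
./vendor/bin/phpunit tests/Unit/InvoiceTotalsTest.php

# 3. Commit only when the tests you started with still pass
git add -p && git commit -m "Refactor invoice totals; behavior unchanged"
```

The tests you wrote in step 1 are the safety net; steps 2 and 3 repeat until the refactoring is done.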
About now I usually hear, “That takes up too much time. When a bug is reported we want to get in there, fix it, clear the ticket and move on. My manager would write me up for wasting time like this.” Which makes me grateful I don’t work in a software factory, btw.
I point to the studies showing incremental refactoring tends to reduce the number of bugs reported in production code over time. These studies also show the code base remains more stable, with fewer all-hands-on-deck upgrade cycles. Development teams embracing incremental refactoring are more productive, produce better code and experience lower turnover.
It is much easier to find and fix a reported bug if unit and end-to-end testing is in place … which is why you must add both to existing code bases that don’t currently have them … but you can do it over time. That’s what incremental means.
The benefits of unit tests during development are extensive and well documented. The benefits of unit tests when a bug is reported are at least 100x more than during the initial development. Well written unit tests mean someone with little or no familiarity with the code will have an easier time fully understanding what the code is supposed to do. Even if you wrote the code six months or six years ago, it will help you refresh your memory about it. If you have ever had to debug code you wrote long ago or code written by someone else you know what a pain it can be to just understand what it should do, let alone what it is doing.
If you lead a team of developers not writing unit tests and not doing incremental refactoring, push to start. There will be a lot of momentum against you but over a short period of time that momentum will turn in your favor. If your management sees this as a waste of time, convince them otherwise. That might mean just quietly doing it without their knowledge.
If you are an individual developer, make writing unit tests a habit. If the tests are missing, write some before you ever change a single character of the source code. When you have tests for the happy path write tests for the bug or change you are about to address. Every minute you spend writing a few good unit tests will save you 10x, 100x or more down the road. You will be more productive. Your code will be better. You will be less stressed when your changes are released. Your teeth will be whiter … well … you get the idea. Good unit tests and an incremental refactoring habit are very good things.
Too harsh? Maybe you won’t think so once you read my thoughts on this.
There is very little green about using solar panels to produce electricity. About 80% of the average solar panel is made from recyclable materials but actually recovering those materials at the end of a panel’s useful life is time consuming, labor intensive and expensive. And even if we solve the poor economics of recycling used solar panels you are still left with toxic waste representing about 20% of each “recycled” panel.
In every state in the US, this toxic waste requires special handling, transport and storage. You can’t just toss it into municipal landfills because the heavy metals will eventually dissolve and leach into the ground water.
Used solar panels are a looming ecological disaster.
There are other downsides to increased solar panel use.
They alter the local climate … and by alter, I mean they increase the temperature of the air surrounding them. If Al Gore wanted a real example of man-made climate change … massive solar farms would be an easy target.
To operate efficiently, solar panels need a mostly unobstructed view of the sun. So you can forget about having trees shading your home in the summer time if you want solar panels on your roof.
And don’t forget, or maybe you didn’t know, trees are the best carbon sinks on the planet. Cutting down trees in favor of solar panels will lead to an even bigger ecological disaster than used solar panels themselves.
But … Wind will save the day!
Not so fast. Aside from decimating bird populations faster than DDT, wind turbines depend on using the energy of the wind to rotate a turbine. But nothing comes for free. The laws of thermodynamics still apply, even to green energy sources.
That means for the wind to turn the turbine it has to give up some of its energy. Yep, that means each turbine slows the wind turning its blades by some small amount. Install enough turbines and we will see the air quality in cities east of wind farms decline because the wind energy previously pushing the pollutants out of a given area is reduced and can no longer push the crud out of the cities.
Yes, it will take a lot more wind turbines to slow down the wind enough to cause problems than we have today … a whole lot more. But it will also take a whole lot more turbines to produce a significant portion of the electricity consumed each day in the US.
Hydroelectric could help.
Hydroelectric power is by far one of the cheapest ways to produce electricity. The payback of a new hydro plant is pretty short and they operate indefinitely with proper maintenance. But building new dams and hydroelectric plants isn’t really in the cards thanks to the EPA and a multitude of environmental groups.
This leaves us with this stark fact …
Until we get serious about using nuclear energy, we aren’t serious about replacing fossil fuels.
I have no interest in debating the safety or efficiency of nuclear power plants. The evidence is available to even the most casual observer. Nuclear power has an astonishingly good safety record. In the United States there has never been a significant release of radiation from a nuclear power plant. Nothing like Chernobyl can happen here because of the way US power plants are designed, built and operated. The Three Mile Island accident was more hype than fact, but it proved the US design is safe … even when something does go wrong.
Every nuclear power plant in the US has the ability to store spent fuel rods and contaminated control rods indefinitely. We also have a huge long term storage option in a deep salt deposit in the southwestern US. We have the knowledge, technology and ability to safely transport spent rods from any plant in the US to the already built storage facility. Once there, the chance of the spent rods causing any kind of ecological disaster is zero. In fact, nuclear power is about the only even remotely green renewable energy source we have today.
The public fears nuclear power because they have been misled about it, and I am certainly not going to remove those fears with this post. This post probably won’t even be read by many people at all.
But if just one person reading this decides to take just a little of their free time to learn just a little about nuclear power in the US … maybe … just maybe, they will talk about it or write about it and then maybe … just maybe, someone else will then do the same.
Once upon a time, the state of the art was to run a web server, usually some form of Apache, on a physical server exposed to the internet. Once virtual machines came along and SSL/TLS became ubiquitous, this new and better way became the state of the art.
But they were difficult to secure and keep secured. It took a lot of time and effort to get a server set up, and virtualization didn’t do much to reduce the time and effort required. You still had to stay on top of updates, upgrades, security hot fixes and on and on. Lots of automation and infrastructure-as-code tools came about to address these problems. One of the best, in my opinion, is Ansible and I still use it today.
Now containers are the state of the art, and there are many reasons you should fully embrace them. One huge benefit of using containers is isolation. Containers are isolated from the host operating system and from each other. That means if someone successfully gains control of one of your containers, the damage is limited. If you fully embrace the idea that all of your containers are expendable, the path to recovery is easy, painless and fast. Another benefit is that they are lightweight, and if you adhere to the single-process-per-container principle, your containers are performant and easily deployed and redeployed with minimal effort.
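To make the single-process idea concrete, here is a minimal Dockerfile sketch for one service. The base image tag is illustrative — pick whatever version your project needs:

```dockerfile
# One process per container: this image runs only the PHP-FPM daemon.
FROM php:8.2-fpm-alpine

# Run as the unprivileged user the base image provides, not as root.
USER www-data

# PHP-FPM listens on 9000; nothing else runs in this container.
EXPOSE 9000
CMD ["php-fpm", "-F"]
```

The web server, database and queue workers each get their own equally small container, which is what makes each of them individually expendable.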
You can roll out new versions of applications and supporting infrastructure with little to no downtime. If you have multiple copies of a given container, you can employ a rolling update or replacement strategy with no user-facing downtime at all. If your containers are behind a proxy like HAProxy or Nginx, you can bring up new containers with your updated applications, let the proxies find and start using the new containers, then simply discard the old ones. Pretty cool.
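The proxy side of that rolling replacement can be as simple as an upstream block. A hedged Nginx sketch — container names and ports here are made up:

```nginx
# Reverse proxy in front of two app containers. During a rolling update,
# add the new container here, reload nginx, then remove the old one and
# reload again -- no user-facing downtime.
upstream app_backend {
    server app_v1:8080;   # old container, removed after cutover
    server app_v2:8080;   # new container, added first
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

With Docker's embedded DNS, the container names resolve automatically as containers come and go, so the reload is the only manual step left to automate.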
This ability has arguably done more to advance the adoption of continuous integration and delivery by devops teams than any other single technology or process, because it can be completely automated. Kubernetes, for example, takes care of all the details and lets your devops team focus on much more important things.
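In Kubernetes that automation is just a few lines of a Deployment spec. A sketch — the name, labels and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one new pod at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:2.0   # bumping this tag triggers the rolling update
```

Change the image tag, apply the manifest, and Kubernetes performs the entire proxy-juggling dance described above on its own.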
If you are using Docker containers, you get heightened security right out of the box … I mean, container: a default seccomp profile, a reduced set of Linux capabilities, and namespace isolation for processes, filesystems and networking. Previously, only the most dedicated server administrators took the time and effort to apply this level of hardening to their bare metal or virtual servers. It was hard, error prone, and the results were inconsistent at best. If you are running Kubernetes, this level of security isn't the default, but it takes less than 10 lines in your pod definitions to match what Docker gives us automatically.
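Those "less than 10 lines" are a per-container securityContext in the pod spec. A sketch that roughly matches Docker's out-of-the-box hardening:

```yaml
# Per-container securityContext in a pod definition.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  seccompProfile:
    type: RuntimeDefault   # Docker applies a seccomp profile by default; Kubernetes does not
  capabilities:
    drop: ["ALL"]          # re-add individual capabilities only if a workload truly needs them
```

Dropping all capabilities is stricter than Docker's default, so treat this as a starting point and loosen it only where a specific workload requires it.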
Host server upgrades become almost painless. Just stand up a replacement virtual server, install Docker (and Docker Compose if you use it), and mount the persistent storage shared with the containers running on the server you are retiring. Bring up new copies of the containers on the new virtual server. Update any load balancers and/or reverse proxies if they aren't configured to pick up the new resources automatically. Run your end-to-end tests, and if everything checks out, turn off the previous virtual server and either archive or destroy it. With a tool like Ansible it can all be automated. In the Kubernetes world it's even easier: just add new resources to the cluster and retire the resources you want to stop using. No muss, no fuss.
These are all great benefits when you're dealing with your sites and applications in production. But where containers really took off is with developers. Integrating containers into your development workflow makes it easier to deal with different versions of languages like PHP. PHP is very popular across the web, and it isn't uncommon for a web developer to maintain code bases running different versions of PHP while simultaneously working on a new project using the latest and greatest version available. The same is true for environments like Node.js and even frameworks like Laravel, React and Vue.js.
If you have special needs in one or more projects, you can address them in the containers. For example, I have a client with multiple projects, and almost all of them need Microsoft ODBC drivers to connect to Microsoft SQL Server. It's not at all a problem. The drivers are included in the PHP container and automatically updated as needed. Again, no muss, no fuss.
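A sketch of how that might look in a Dockerfile. The package versions, distro release and repo URL below are assumptions — check Microsoft's current install instructions for your base image before using this:

```dockerfile
# Sketch: PHP image with Microsoft's ODBC driver and SQL Server PHP
# extensions baked in. Versions and repo details are assumptions.
FROM php:8.2-fpm

RUN apt-get update && apt-get install -y curl gnupg unixodbc-dev \
    && curl -fsSL https://packages.microsoft.com/keys/microsoft.asc \
       | gpg --dearmor -o /usr/share/keyrings/microsoft.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/microsoft.gpg] https://packages.microsoft.com/debian/12/prod bookworm main" \
       > /etc/apt/sources.list.d/mssql.list \
    && apt-get update \
    && ACCEPT_EULA=Y apt-get install -y msodbcsql18 \
    && pecl install sqlsrv pdo_sqlsrv \
    && docker-php-ext-enable sqlsrv pdo_sqlsrv
```

Rebuilding the image picks up driver updates, so the host machines never need the ODBC stack installed at all.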
By using containers in your development workflow, you can avoid having to install anything other than Docker (I use Rancher Desktop on my Mac), git and your editor of choice — I use Microsoft VS Code. I haven't set up a new MacBook in over two years, but that is all I would need to check out a project and get to work. I use more tools and have things like PHP, Node, Composer, Vite, MySQL and a lot more installed on my MacBook, but much of that is because two years ago I hadn't fully embraced using containers in my development workflow. I am about 80% of the way there for all of my projects, and when I get to 100% I will probably wipe my MacBook and start over.
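A minimal sketch of what such a checkout-and-go project might carry with it — a docker-compose.yml whose service names, images and ports are purely illustrative, not from any real project:

```yaml
# docker-compose.yml -- hypothetical local dev stack
services:
  app:
    image: php:8.2-fpm
    volumes:
      - .:/var/www/html    # live-mount the checkout so edits show up immediately
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - app
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: dev-only-password   # fine for local dev, never production
```

Clone the repo, run `docker compose up`, and the entire language runtime, web server and database come along with the project instead of living on the laptop.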
Extensive use of containers in development and production lets me focus more on solving interesting and challenging problems and less on infrastructure and my security attack surface.