Replacing Software Stacks Is Never The Solution

Mattias Geniar, Monday, December 22, 2014

The tweet that prompted this post referred to the blind replacement of the ntpd daemon with alternatives, such as tlsdate and OpenNTPD, as a result of the vulnerabilities found in ntpd.

While I am in no way downplaying the security risks and the impact of those ntpd vulnerabilities, especially combined with the recent CVE-2014-9322 that allows local user privilege escalation in recent RHEL/CentOS kernels, that is no reason to completely abandon a service overnight and blindly run to an alternative.

For instance, I saw a number of tweets with "suggestions" to fix these vulnerabilities with the following one-liner.

apt-get remove ntp && apt-get install tlsdate

This indeed removes ntpd. And it indeed installs tlsdate, which does not have CVE-2014-9295. Short-term, yes it's a fix.

You may no longer realise this, as most of it is automated and abstracted away behind some config management system, but ntpd is a crucial part of your server. It's as important as DNS resolution.

Should you really just replace it with a piece of software you don't know? Are you monitoring tlsdate? Did you configure tlsdate properly? Do you know how to troubleshoot tlsdate? Did you fine-tune the tlsdate configs to your needs? Do you have years of experience with tlsdate, as you do with ntpd?
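
By way of contrast, keeping an eye on ntpd is a well-trodden path. A rough monitoring sketch, assuming the standard monitoring-plugins (Nagios-style) package is installed; the plugin path, target host and thresholds below are only examples:

# Show the peers ntpd is using, with the offset and jitter it reports (in milliseconds)
ntpq -pn

# A typical Nagios/Icinga-style check: warn at 0.5s clock offset, go critical at 1s
# (the plugin path varies per distribution)
/usr/lib/nagios/plugins/check_ntp_time -H pool.ntp.org -w 0.5 -c 1

Little of that tooling or muscle memory carries over to tlsdate on day one.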

This doesn't only apply to ntpd; it applies to the recent OpenSSL-to-LibreSSL fork as well. Why is it that as soon as a security vulnerability is found, everybody jumps ship to an alternative instead of investing the resources to fix the problems in the first place? Do you really think the alternatives don't have security loopholes?

Besides the shortsighted tweets and remarks, there are valid, well-supported arguments for migrating away from NTPD. You know, thoughts that don't just occur overnight.

But forking projects and replacing crucial services without rational thinking only creates a greatly fragmented landscape in the open source community that nobody benefits from. And I'm aware that some projects are flawed by design, especially since they were designed over a decade ago. But even those projects can receive patches, bugfixes and refactored code to improve the quality.

The only time you should abandon a software project is after you have carefully considered the alternatives, gained experience with the replacement in a test environment, and learned how to monitor, secure and debug that new software. Not the day after a vulnerability disclosure, as "a fix" to the problem. Abandoning a software stack is (almost) never the solution.
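
And if the immediate goal is only to close CVE-2014-9295, the boring route is to install the patched ntp packages your distribution ships rather than to swap daemons. A minimal sketch, assuming a fixed build has already been published for your release:

# Debian/Ubuntu: upgrade just the existing ntp package to the patched build
apt-get update && apt-get install --only-upgrade ntp

# RHEL/CentOS: the same idea with yum
yum update ntp

# Verify the daemon came back and is synchronising against its peers again
ntpq -p

Same short-term result as the tlsdate one-liner, but without a new daemon to learn, monitor and debug.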



Hi! My name is Mattias Geniar. I'm a Support Manager at Nucleus Hosting in Belgium, a general web geek & public speaker. Currently working on DNS Spy & Oh Dear!. Follow me on Twitter as @mattiasgeniar.


Comments

Tom Van Looy Monday, December 22, 2014 at 22:52 - Reply

I totally agree with the don’t blindly jump ship remark. But I don’t agree with your forking point of view.

So you say that a “community” of people will happily work on large, completely outdated, legacy codebases? I actually don’t believe that it works that way. If I remember correctly, in OpenSSL there was like 1 developer doing some work. When I see the amount of crap (ancient compilers, platforms and crypto and “arguably” even entire backdoors) that were removed and cleaned up in LibreSSL I get sad that most of my software is linked against OpenSSL.

In fact, OpenSSL was forked some time ago now, and a few days ago I read that linking CFEngine against LibreSSL instead of OpenSSL made the Valgrind output drop from 600K to 2.5K. I don’t know about you, but for me, that justifies the fork.

OpenNTPD happened because motivated developers wanted to clean up NTPD and their work was rejected. That was 10 years ago. OpenNTPD is secure, and today NTPD is found to be extremely vulnerable. NTPD is also 66 times the size of OpenNTPD, so just on a lines-of-code basis there are a lot more bugs in it. Possibly exploitable. So now, even when PHK himself says “fsck this code, I’ll rewrite it all”, you say that the OpenNTPD rewrite was not justified?

Even OpenSSH happened for the same reasons. Created as a fork, partly because of security problems with current SSH clients. We all have secure SSH today because of that.

X.org runs as root. Nobody is fixing this, instead they focus on adding transparent windows, eye candy and cloning Apple. OpenBSD has a privilege separated X.org for some time now, why is this not merged? They did the work, you can just get their code.

The least you can say is that different people have a different focus. But I think diversity is very good and stuff should get forked every now and then. Especially with this kind of critical network facing daemons.

Ubuntu is also just based on Debian and they are doing cool stuff. They help spread good ideas. I hope Devuan also becomes a successful and healthy project. Competition can even benefit projects, like what is happening with HHVM / PHP7.

So, I don’t see why you are so against it.


    Mattias Geniar Monday, December 22, 2014 at 23:07 - Reply

    You have very valid points. Most of them, I can’t refute.

    If I remember correctly, in OpenSSL there was like 1 developer doing some work. […] I get sad that most of my software is linked against OpenSSL.

    Exactly: most, nearly all, software is compiled against OpenSSL. It’s that one developer’s fault for not accepting patches. If everyone who contributed to LibreSSL had had their patches accepted by OpenSSL, the software world would be a better place.

    Instead, we have 2% of packages compiled against LibreSSL, and the other 98% still compiled against the old, legacy OpenSSL. Why? Because changing isn’t easy: changed APIs and function calls require code changes.

    If only all patches were accepted by OpenSSL … In fact, I honestly believe that if Google had put as much effort into OpenSSL as it did into LibreSSL, the library would have come out stronger. Instead, they took it as a marketing opportunity to be the company “to fork OpenSSL and make it better”. At the expense of all other software …

    OpenNTPD happened because motivated developers wanted to clean up NTPD and their work was rejected. That was 10 years ago.

    That’s a failure of the NTPD project that cannot be justified. They should have worked together. They didn’t. The community loses, because now there are 2 packages to maintain. 2 packages to “package”. 2 codebases to maintain.

    I don’t advocate a dictator regime with only one “winning” side; competition is good to keep everyone on edge and motivated, but there are cases where that just doesn’t pay off. Especially the recent Devuan fork: it’ll divide the Debian world and only make it harder for sysadmins to manage all the distros available.

    The least you can say is that different people have a different focus. But I think diversity is very good and stuff should get forked every now and then. Especially with this kind of critical network facing daemons.

    Diversity is indeed good. My main problem with forks, and this year especially, is that everyone just seems to fork everything. It gets some love during the honeymoon period of the project when everyone hypes about it, but 2 years later all that development effort is lost because the projects aren’t maintained anymore.

    Look at all the Nagios forks. How many survived? I don’t want to see that happening to important libraries like OpenSSL, NTPD, … if that happens, we all lose.

    But I hope I’m wrong, and LibreSSL and OpenNTPD survive and come out as the winners. Time will tell. :-)


      Tom Van Looy Monday, December 22, 2014 at 23:17 - Reply

      I don’t believe OpenNTPD will come out as “the winner”, because they don’t have the best timekeeping (and the portable version is years behind what is in base anyway). Their focus was different: security, maintainability and “reasonable” timekeeping.

      It’s true that it’s sad that there are now 2 codebases. The bad news is that PHK is now starting yet another codebase, instead of improving OpenNTPD. It all seems like a big NIH show and that is indeed very sad.


      Theo de Raadt Monday, December 22, 2014 at 23:36 - Reply

      I have been involved in this scene for a very long time, and quite frankly that is a bunch of socialist tripe, this assumption that the best way to move forward is if we (who do the work) do so in a centralized model. The problem apparently is “Why can’t we all get along”!

      Over the last 10 years, the OpenSSL team refused help with their source tree. From OpenBSD people. From others. As programmers, the OpenSSL team protected the ecosystem they know, because it suited their needs, but over time that has added up to be at the expense of the community. We see what happened. There will be more, unless a proactive approach is taken.

      About 12 years ago, the NTP team also refused help from OpenBSD. We tried to push minimal pruning diffs upstream, especially work towards a very minimal privsep. They did not want such things, thus OpenNTPD was created. Many people and projects adopted OpenNTPD in non-OpenBSD roles, so obviously the NTP codebase lost some attention. The people who switched to OpenNTPD were often security-critical people, who now had no itch to scratch since their needs were met.

      So like OpenSSL — they protected the codebase they know, to their own advantage. Eventually even they don’t know the crud in their codebase. As time goes by, no one audits the code to verify it is following newer best practices, because the effort is too high. The community loses.

      (What is the most important criteria for a NTP daemon? That it doesn’t get you holed. Second criteria is that it keep good time. What is the most important criteria for a SSL library? That it does not get you holed. Second criteria is that it does SSL transactions well. What is the most important criteria of a car? That it does not kill you by exploding or such. Second criteria is that it gets you from point A to B).

      Much of this is the result of certain old projects not adapting their code to the modern world. There are a few more coming down the pipe like this, just watch.

      I have no idea where you are coming from. Arguments like yours are the reason why so much software stays in the dark ages. You appear to believe innovation should be centralized, no let me be accurate — you believe it can be centralized. Hogwash.

      You go way too far when you say “cannot be justified”. How astoundingly selfish of you to decide how people should spend their own time.


      Mattias Geniar Tuesday, December 23, 2014 at 09:44 - Reply

      Hi Theo,

      First of all, let me say thank you for taking the time to reply to my little blog. I seriously appreciate all your efforts with OpenBSD and OpenSSH – the Unix world wouldn’t be where it is today, if it wasn’t for you.

      You raise a lot of arguments why a centralised, “communistic”, approach wouldn’t work. I understand your points, and in reality – I indeed see it failing. But the truth is that this new approach, where everyone goes their own way, isn’t working either.

      You go way too far when you say “cannot be justified”. How astoundingly selfish of you to decide how people should spend their own time.

      True. People should do what they want with their own time, just as I have the right to my opinion on how they actually spend it. Is it better to have 10 people each work on their own version independently, resulting in 10 separate pieces of software, or to have 2 versions that 5 developers each can work on?

      Because lately, every major project has been forked above and beyond, and the landscape we’re heading to is one with 10 developers each working on their own project.

      About 12 years ago, the NTP team also refused help from OpenBSD. We tried to push minimal pruning diffs upstream, especially work towards a very minimal privsep.

      If this keeps happening, yes it’s a valid reason to fork the project and start / maintain your own. You have a team of dedicated and skilled developers, ready to take over. Most forked projects absolutely don’t.

      I also understand you want to protect the OpenBSD ecosystem.

      Much of this is the result of certain old projects not adapting their code to the modern world. There are a few more coming down the pipe like this, just watch.

      Then I certainly hope open source maintainers and project leads take the diffs/PRs from other projects seriously. We do not need yet another fork of yet another major piece of the operating system; it won’t benefit the Unix/Linux/BSD community at all.

      Maybe I’m naive in thinking we can all get along. But I do hope it’s possible.


        Theo de Raadt Sunday, February 1, 2015 at 22:10 - Reply

        I also understand you want to protect the OpenBSD ecosystem.

        A snide insertion.

        What did you hope to gain with that sentence?

        Maybe I’m naive in thinking we can all get along. But I do hope it’s possible.

        You hope… after the previous comment? Your argument is against software choice; you argue against the artistic/engineering process of choice creation; you argue against the basic tenet of “have the source, can improve it, can redo it entirely different, can distribute and share it”. You argue that users should ignore new ideas. You ignore that the entire C/Unix ecosystem was built on layers of forking ideas and code!

        Hiding inside your argument is a strong piece of “keep things the same, because that is what I know”.

        How can it be possible for people to get along, when you insert such snide sentences? Why bother trying to get along? There is a significant divide between people trying to advance software, and those who do system administration, Sir.

        Surprised you replaced the telnet software stack.


          Mattias Geniar Monday, February 2, 2015 at 14:58 - Reply

          I also understand you want to protect the OpenBSD ecosystem.

          A snide insertion.

          What did you hope to gain with that sentence?

          I’m not sure what’s unclear about this? OpenBSD forked several projects, because patches/feedback weren’t appreciated or approved upstream. That’s what I mean by “protecting the OpenBSD ecosystem“. The BSD community saw the fork as the only viable way, and went for it. I respect that. It paid off. There is no negativity in this remark, whatsoever.

          That’s all I meant by that “snide insertion“.

          You argue against … [snip]

          I argue against a lot of things, but not against those things you mentioned. My original point in this post (which I’ve had to re-read, since it’s been so long) was that I oppose the knee-jerk reaction of forking software whenever something goes wrong.

          You’ve proven that those choices for the BSD community aren’t knee-jerk reactions and are, in fact, well-thought-out processes. Good for you. But not every fork is that way, obviously.

          You ignore that the entire C/Unix ecosystem was built on layers of forking ideas and code!

          No, I don’t. The reason I’m now working with MariaDB is the MySQL fork. It’s a maintained fork, it has proven its merits and it has been adopted by the community. I applaud that; it kept the RDBMS community alive and responsive.

          Hiding inside your argument is a strong piece of “keep things the same, because that is what I know”.

          As a sysadmin, my job is to manage applications (and as a result, their servers) in a secure, stable and performant way. For that reason, part of me appreciates the stability of “keeping things the same”.

          That doesn’t mean I oppose every change. I’m not married to OpenSSH, OpenSSL, MariaDB or anything else. If something better comes along, I’ll test it, automate it and support it. I’ll do what is best for my business.

          Maybe some of my intentions aren’t that well communicated, as I can’t show my emotions, my hand gestures, my facial expressions, … I’m getting the impression my messages come out the wrong way.

          In the end, I want a secure and stable server environment. I don’t really care how that happens, with 1 codebase or a hundred forks. But I am against forks just for the sake of forks. I had assumed my point would have been clear on that, by now.

          If there’s ever a chance this can be discussed person-to-person, I feel it would be a lot easier to share my stance on things. We may not agree on things (which seems likely), but it was certainly never my intention to come off as snide.


      Benjamin Smith Saturday, December 27, 2014 at 22:34 - Reply

      Most people don’t understand the value of failure. But, in fact, there is almost as much value in an idea that was tried and failed as in one that succeeded. Open Source brings the Silicon Valley ideal of “fail early, fail often” to software projects. There’s more than one right answer to a problem, and some “right answers” will be better suited to some environments than others.

      Since a single project cannot, by definition, be all things to all people, getting the “most right” answer requires numerous attempts. Enter the fork and/or “competing project”. DVCS is so successful in part because it trivializes the fork and allows competing ideas to be tried in parallel.

      Since when is reducing consumer choice considered beneficial?

      More on Fail Early / Fail Often: http://www.virgin.com/entrepreneur/fail-early-fail-often


Kim Tuesday, December 23, 2014 at 00:17 - Reply

While I do agree we shouldn’t just change the applications we use en masse because of a security issue, it’s still something you should reconsider.

Plus, when something like this happens we should invest time in a) making the buggy software better or b) supporting a fork that strives for a). Don’t get me wrong: don’t just fork because you don’t like the guy developing it and he said X and Y 7 years ago; try and do your best to work together. But if you see there is no working together, just fork it and make sure you keep it updated and clean up the code.


    Mattias Geniar Tuesday, December 23, 2014 at 09:47 - Reply

    You just summed it up in one paragraph. :-)

    Fork if there really is no alternative, and try to maintain it as best you can. Don’t be a dick about it, and accept upstream/downstream patches to make the software better. If you don’t have the time to maintain it (and there is no shame in that, we all have busy lives), pass on the project to someone else. Stay on board as a counsellor, guide them in the directions you like – but don’t enforce it.


Matt Clare Wednesday, January 7, 2015 at 22:30 - Reply

Totally agree with your thoughts.
But for novices in that situation, it would be an understandable, if not ultimately justifiable, response. A novice’s biggest risk to their security is homogeneity and being discovered, as their only reasonable expectation is to be able to protect themselves from script kiddies and hide in obscurity.

Knowingly or unknowingly, people in control of systems that aren’t asking the questions you outlined have probably already conceded that they’re vulnerable to a deliberate attack.

In summary, switching technology meets a low threshold, and a lot of people choose to operate at that level.

But I’m not accusing you of preaching to the choir, just thought I’d identify the trend.

