This article was originally published in The Wall Street Journal on January 7, 2018.

The first cybersecurity crisis of 2018 landed with a stinging blow late last week. Word got out that security researchers had discovered significant flaws in the microchips, made by several manufacturers, that power virtually every modern device. Hackers could theoretically use those vulnerabilities, given the ominous monikers “Meltdown” and “Spectre,” to suck vital information out of devices ranging from gigantic business servers to the phone in your pocket.

Naturally, panic has ensued. Some dire warnings suggest that the necessary security fixes will slow down machines to speeds not seen since the days of the dial-up modem. Others think the flaws are built so deeply into their devices that the only solution is to rip out and replace the affected ones, at a cost of hundreds of billions of dollars.

Stop. Take a deep breath.

These flaws, while newly discovered, are anything but new. Discovering and addressing such problems is a normal part of the technology life cycle.

The “Meltdown” and “Spectre” story is actually good news inasmuch as it reflects an increasing consciousness of cybersecurity. The industry has increasingly robust processes to minimize the chaos hackers can sow using such flaws—and last week’s news shows they’re working.

To understand this silver lining, start by realizing every technology has some flaw or vulnerability. Even after extensive time and money are spent “debugging” devices or software before release, computing experts have calculated that up to 1% of the underlying code will contain errors.

The knowledge that these flaws exist has created a race between security researchers and hackers. The challenge is that while the rate of vulnerabilities has remained stable or even dropped over the past 20 years, their absolute number has grown dramatically as devices have become inexorably more complex.

In the 1970s, engineers had to write some 400,000 lines of code to make the space shuttle fly. At a 1% error rate, that’s 4,000 possible security holes. Today’s typical smartphone is powered by 10 million lines of code, and the average motor vehicle by 100 million. Hackers know this well, which is why they spend hours combing through the code to find some of those 100,000 or one million errors. Once they find a flaw they can use to steal information or money, they launch a computer virus to do it.
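The back-of-the-envelope arithmetic above can be sketched in a few lines of code. This is purely illustrative, applying the article’s rough 1% error-per-line estimate to the code bases it mentions; the names and figures below come from the article, not from any measured data set.

```python
# Illustrative sketch: the article's rough estimate that up to 1% of
# lines of code contain errors, applied to the code bases it cites.
ERROR_RATE = 0.01  # up to 1% of lines may contain errors

code_bases = {
    "space shuttle (1970s)": 400_000,
    "typical smartphone": 10_000_000,
    "average motor vehicle": 100_000_000,
}

for name, lines in code_bases.items():
    # Multiply lines of code by the error rate to estimate flaw count.
    print(f"{name}: ~{int(lines * ERROR_RATE):,} potential flaws")
```

Running this reproduces the article’s figures: roughly 4,000 potential holes for the shuttle, 100,000 for a smartphone, and one million for a car.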

Recognizing what hackers are up to, security researchers also are constantly looking for flaws. That is one of the positive aspects of the “Meltdown” and “Spectre” story that has been significantly—and unfortunately—played down: The good guys figured this one out before the criminals did.

Security experts world-wide have been working for at least six months to fix these flaws and protect vulnerable components. Many critical systems already have had their defenses adjusted to minimize the chance of a successful attack. The Department of Homeland Security played its part too by quickly leaning in and asking the private sector what it needed as well as providing a central repository of information on how to mitigate this threat. Much work remains to be done, but we appear to be a few steps ahead of the hackers.

Precisely how the chip flaws will be addressed will take some time to hash out. The fixes will surely take many forms, ranging from patching software to layering in more cyberdefenses. Bad ideas, such as proposed laws to tighten cybersecurity requirements or punish hacking victims if they failed to fix the flaws “reasonably,” will surely also bubble up. Thankfully, at least at the federal level, these ideas probably will be swatted down quickly.

Most important is maintaining the constant state of alert and responsiveness that brought “Meltdown” and “Spectre” to light. Should we fail to keep our virtual guards up, then we may unfortunately learn about the next major cyber weakness from the bad guys.