IPv6 and Security: The Threat From Version 4
The Internet has run out of space!
Well, not really. The problem is that way back when the Internet was created, its designers figured that addresses in the billions would last a very, very long time. It’s similar to the idea that a hard drive measured in hundreds of megabytes was a big drive…and it was…back then. Unfortunately, they never considered that their military project would go mainstream…and once it did, there were other problems that demanded more immediate attention. Back then, there would only have been a problem if every person on Earth wanted an address…but that was before the plethora of devices that could connect to the Internet.
IPv6 has a lot more address space. Where IPv4 had almost 4.3 billion addresses, IPv6 has something on the order of 2^128 possible addresses, roughly 340 undecillion. That is a huge number and will keep our myriad of devices happy for some time to come. It’s also touted as more secure, something that will make life more difficult for the bad guys, and so on. It isn’t all gumdrops and roses, however. While they are both Internet Protocols, IPv6 cannot understand IPv4. It’s like an Australian businessman and a Chinese businessman sitting at a bar trying to talk to each other in the absence of their translators: each has his own native language, and each language has its own words and phonemes, but to the other’s ears the sounds don’t make any sense.
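For a sense of the scale involved, here is a quick back-of-the-envelope comparison in Python; nothing here depends on any networking library, it is just arithmetic:

```python
# Back-of-the-envelope comparison of the two address spaces.
ipv4_addresses = 2 ** 32      # about 4.3 billion
ipv6_addresses = 2 ** 128     # about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:,}")
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:,} times larger")
```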
“You basically need to translate Version 6 to Version 4 and we can do that by encapsulation,” Herberger explained to CRN. “And the encapsulation standards are all over the map. This situation causes problems with security inspections because if I can send an attack that exploits Version 4 vulnerabilities through a Version 6 inspection module, I’ve got a pretty high chance of success because the Version 6 inspection module will not be able to read it. And we haven’t been able to resolve this problem yet.”
To put it another way, the Version 4 exploits would be effectively carried as a passenger through a security screen geared towards IPv6. Read more…
It would seem that the easy route would be to make IPv6 understand IPv4, right? IMHO, doing that would be a stopgap: once the need to understand IPv4 went away, all of that translation code would still be sitting inside the standard implementation. Yes, you can ship a patch that tells the stack to ignore this or that, but for clean code you would have to rip out all of the translation functionality…and that means thousands of lines of code in a standard implementation…never mind a custom one. Even code that is never taken still has a cost: the checks that decide to skip it consume cycles, and cycles spent deciding not to do something are wasted cycles. Now that we can feel moments shorter than a second, the last thing we want is to wait an extra 5-10 seconds for a page to resolve.
They may simply have to add another program whose sole purpose is to read encapsulated traffic. Is it annoying? Yes…but a lot easier to remove than something actually built into the standard. They will also have to settle on common encapsulation standards so that the reader programs don’t run into the Australian-Mandarin translation problem. Or they could simply let it go Wild West and let IPv4 people take their chances until they upgrade. I think, though, that would be extremely unpleasant.
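Going back to the reader-program idea, here is a rough sketch of what such a helper might look like: it peeks at the version nibble of a packet and, when an IPv6 header announces an encapsulated IPv4 payload (Next Header value 4, the IP-in-IP tunneling protocol number), unwraps it and hands the inner packet to the IPv4 inspection path as well. This is an illustration only; the parsing is deliberately minimal (it ignores extension headers), and the inspect_v4/inspect_v6 hooks are hypothetical placeholders, not part of any real inspection product:

```python
# Minimal sketch: route tunneled IPv4 payloads to the IPv4 inspector
# instead of letting them slide past an IPv6-only inspection module.

IPPROTO_IPIP = 4  # "Next Header" value meaning an encapsulated IPv4 packet

def inspect_v4(packet: bytes) -> None:
    """Hypothetical IPv4 inspection hook."""
    print(f"IPv4 inspection on {len(packet)} bytes")

def inspect_v6(packet: bytes) -> None:
    """Hypothetical IPv6 inspection hook."""
    print(f"IPv6 inspection on {len(packet)} bytes")

def inspect(packet: bytes) -> None:
    version = packet[0] >> 4          # high nibble of the first byte
    if version == 4:
        inspect_v4(packet)
    elif version == 6:
        next_header = packet[6]       # byte 6 of the fixed IPv6 header
        if next_header == IPPROTO_IPIP and len(packet) > 40:
            # IPv4-in-IPv6 tunnel: strip the 40-byte IPv6 header and
            # run the passenger through the IPv4 rules as well.
            inspect_v4(packet[40:])
        inspect_v6(packet)
```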
Cybersecurity Experts Eye Self-Correcting Network to Thwart Hackers
It sucks to be “vulnerable.” There are only two feelings worse than “vulnerable,” and they are “violated” and “helpless.” Of course, “violated” is what usually follows a long enough period of “vulnerable,” so the two are strongly related. One of the really bad things about being vulnerable is that you tend to feel it only after you have been victimized the first time. It is not one of those feelings that gradually creeps up on you like fear; vulnerability hits you all at once and in full force.
IMHO, this is the situation with computers: we are all vulnerable, period. No matter what we use, if enough people go looking, vulnerabilities can be found in any system. Sometimes they are small annoyances; other times they are just short of full-blown disasters (and on rare occasions they actually are full-blown disasters), but they are all caused by vulnerabilities. The code that leaves the network software manufacturer works (usually), but under unusual circumstances (i.e., not something a normal user would do) it can behave unexpectedly, because no vendor can test for every eventuality. Unlike a living system, a cybernetic system cannot mutate its function in response to a threat…at least not yet.
The term moving-target defense — a subarea of adaptive security in the cybersecurity field — was first coined around 2008, although similar concepts have been proposed and studied since the early 2000s. The idea behind moving-target defense in the context of computer networks is to create a computer network that is no longer static in its configuration. Instead, as a way to thwart cyber attackers, the network automatically and periodically randomizes its configuration through various methods — such as changing the addresses of software applications on the network; switching between instances of the applications; and changing the location of critical system data. Read more…
The idea is that paths of vulnerability, or vectors, are basically open doorways once their exploits are known, and they stay open until the software is patched. This is because the software’s purpose is to function…not to test or repair itself. The only way the software could fix its own vectors would be if there were a subroutine that basically ran attack scenarios until it found a weakness and then reported that weakness to another subroutine whose purpose was closing vectors. That would be a serious hit on performance: not only would the program have to perform its primary function of running the network, it would also have to be attacking itself and fixing itself all at the same time. Instead of creating a psychotic program, the idea of the self-correcting network is much like moving through a maze that constantly changes its pattern: if you have permission from the central functions, your data passes through the maze as if it weren’t there. If you are trying to sneak in using a known vulnerability, you will be thwarted because the internal pathway will not be standard; each network will be different.
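To make the maze idea concrete, here is a minimal sketch, assuming a shared secret between the network and its legitimate clients: both sides derive the service’s current port from that secret and the current time window, so authorized traffic always finds the door while an attacker aiming at last hour’s address hits a wall. The key, rotation interval, and port range below are all invented for illustration, and real moving-target defenses randomize much more than a port number:

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"example-secret"   # hypothetical key known to network and clients
ROTATION_SECONDS = 300              # reconfigure every five minutes (illustrative)
PORT_RANGE = range(20000, 60000)    # pool of ports the service may hop between

def current_port(secret: bytes = SHARED_SECRET, now: float | None = None) -> int:
    """Derive the port the service listens on during the current time window."""
    window = int((now if now is not None else time.time()) // ROTATION_SECONDS)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % len(PORT_RANGE)
    return PORT_RANGE[offset]

# A legitimate client computes the same value and connects without noticing
# the "maze"; a scanner working from a stale port number simply misses.
print("service is currently on port", current_port())
```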
IMHO, a self-correcting network is similar to a living organism: no single disease will wipe out all life. Even within a single species there will always be a family line that for some reason or another stops the disease cold. This is due to the fact that while the basic platform is the same, each instance of life is actually unique. There are minute differences in the way each organism does things and sometimes it can be too much for a disease to conquer. The disease can mutate (influenza anyone?) to create new vectors but in doing so may actually cause itself to be less effective with the old vectors.
Computers have far less variation than living organisms, but they have speed. Software reconfiguration, even at its slowest, is orders of magnitude faster than biology…and that is exactly what the self-correcting network is doing: reconfiguring software. It isn’t moving physical parts…just changing how you get to them. Doing so in a manner that is not predictable to an outside viewer? That is the trick.
Orangutans at Miami zoo use iPads to communicate
The tablet computer — the most popular of which is the iPad — has taken the modern world by storm. There is no excessively complex learning curve, the device is on and usable immediately after pressing the power switch, and there is no weird writing implement (stylus?) to lose and thereby render the device pretty much useless. You use your fingers with modern tablets, since they have those marvelous touch-screens…like our smartphones. The interesting thing is that a tablet will respond to the touch of any finger — or appendage — that fulfills the activation requirements of its screen by changing the display. As long as the being using the tablet can understand that touching a certain place on the glass will actually do something, the device can open up a visual horizon that would not otherwise be possible.
While Jacobs and other trainers have developed strong relationships with the orangutans, the iPad and other touchscreen computers offer an opportunity for them to communicate with people not trained in their sign language.
“It would just be such a wonderful bridge to have,” Jacobs said. “So that other people could really appreciate them.”
Orangutans are extremely intelligent but limited by their physical inability to talk, she said. Read more…
I feel that there is a slight — but intentional — obfuscation of an uncomfortable truth here. Yes, you can teach a dog, or a cat, or a bird to respond in specific ways to specific stimuli. Every now and then they will surprise you with something new. But how comfortable would you be if you got home one day and found that your Abyssinian cat had written on your iPad “That girl who smells like strawberries and stays the night sometimes left something for you. I did not open it. I really, really wanted to. Can we open it now that you are home?”
Putting an unintelligent animal in a cage can always be justified by claiming that the animal will do better under supervision. It will be fed, and cared for, and protected from predators. Wonderful, right? An animal that can communicate, however, becomes a “he” or a “she” and then you begin to wonder “what’s he thinking about right now?” Well, since they can communicate, you just have to ask…be prepared for the answer, though. The more information they can access, the more they can learn. As it is, what’s the real difference between putting an orangutan in a cage and putting a mute member of a newly discovered tribe in an identical cage and giving them both iPads? The fact that we can’t produce offspring with the orangutan?
There are many who believe that there are three major forms of life on this planet: plant, animal, and human. I wonder where, for them, animal ends and human begins?
Magnetic bacteria may help build future bio-computers
The average computer is a collection of thousands of discrete parts that, when constructed and programmed properly, creates a general-purpose machine capable of…well…pretty much anything. Many of the discrete parts are themselves constructed of even smaller discrete parts; even a simple IC chip contains thousands of individual circuits. A CPU or a GPU contains millions. The odd thing is that all of the complexity inherent in the computer is nothing compared to one single bacterium.
Even the smallest bacterium (less than a micrometer in length) can reproduce. Could a computer create a duplicate of itself? Sure…given the parts, the tools necessary to connect those parts, a way to manipulate the tools, an energy source to power the manipulators, a place to actually do it, and the programming to actually be able to do anything at all. A bacterium on the other hand needs only food and a place to live — usually something like a drop of water — and it can churn out exact replicas of itself until the food is gone or there isn’t any more space to expand.
To make a computer process information at a faster rate we can either bump up the operating frequency or make the circuitry smaller. Bumping up the operating frequency is a stopgap solution at best: it requires more power, which creates more heat, and the speed can only be pushed so far before the circuits burn out. Making the circuitry smaller actually decreases power consumption, since signals don’t have to travel as far to reach their destinations; alternatively, you can keep the power at its former level and get faster responses, since more signals reach their destinations in the same amount of time. Mobile devices are especially sensitive to power usage, so smaller circuitry lets them deliver snappy performance while drawing less power.
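That trade-off roughly follows the standard dynamic-power relation for switching logic, P ≈ C·V²·f: raising the clock raises power at least linearly, while a process shrink lowers capacitance (and usually voltage), so you can win back speed without the heat. The numbers below are made up purely to show the shape of the relationship:

```python
def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Approximate dynamic switching power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

baseline = dynamic_power(capacitance=1.0, voltage=1.2, frequency=2.0e9)

# Option 1: just crank the clock 50% on the same silicon.
overclocked = dynamic_power(1.0, 1.2, 3.0e9)

# Option 2: shrink the circuitry (illustrative figures: 30% less capacitance,
# slightly lower voltage) and still run the higher clock.
shrunk = dynamic_power(0.7, 1.0, 3.0e9)

print(f"overclock: {overclocked / baseline:.2f}x the baseline power")
print(f"shrink:    {shrunk / baseline:.2f}x the baseline power")
```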
Unfortunately, making circuitry smaller runs into a problem very quickly: manipulator mismatch. Imagine that you have a splinter in your big toe and it’s really painful. You don’t have long fingernails but you have something better: tweezers. It’s no problem removing that plank from your big toe with tweezers. Now imagine that you have a painful splinter in your big toe but instead of tweezers you have boxing gloves…that splinter is gonna hurt for a long time. The problem is similar with computers; we are rapidly approaching the point where the circuits we wish to create are smaller than we can focus the wavelengths of light needed to create them. Our “tweezers” are starting to be too big.
“We are quickly reaching the limits of traditional electronic manufacturing as computer components get smaller,” said lead researcher Dr Sarah Staniland of the University of Leeds.
“The machines we’ve traditionally used to build them are clumsy at such small scales.
“Nature has provided us with the perfect tool to [deal with] this problem.” Read more…
Nature routinely works with the very small: from one single cell an entire human being (or even a Blue whale) can be created. Following the programming in the encapsulated DNA within it, the cell divides, multiplies, and differentiates until a living entity is created.
If computers could be grown they would be less expensive and far more damage-tolerant; they could — theoretically at least — repair themselves independent of the operating system…much like your body repairs itself independent of conscious thought. All the computer would need is a growth medium and the safe cubbyhole where you store it. Circuitry could mimic biological processes and enjoy self-arrangement and perhaps even augment its power by using the light in the room or sunlight (the ultimate “green” computer).
There are two potential problems with biological circuitry, IMHO: if it’s alive, it can die; and a computer virus could become something you catch from your computer…or something your computer actually catches from you. They could add preservatives to the systems, but the bottom line is that organic material decomposes; that’s fine so long as you manage to transfer your data to another system before it does. The computer virus thing is different…and if there is one thing Nature has used to hit us over the head, it’s the fact that once life gets a foothold it is very difficult to eradicate. Here’s hoping that the bio-computer under your desk isn’t planning to latch onto your leg when you least expect it.
A first: Hacked sites with Android drive-by download malware
A drive-by is a terrible thing. In meatspace, you could be walking down the street minding your own business, and have the unfortunate timing to be in front of the wrong house at the wrong time. Out of seemingly nowhere a car literally “drives-by” with the windows open and bullets exiting it like water from a fire hose. Accuracy is not necessary; if enough bullets are shot at the target then the target will be hit regardless of position. Unfortunately, this lack of accuracy in a drive-by is the second most lethal cause of extensive collateral damage (the first being committing the drive-by in the first place) to persons who have nothing to do with the target.
A drive-by in cyberspace is an attack where you don’t have to do anything to get hit. Usually you have to actually click a link that takes you to an infected page carrying the payload…a drive-by loads itself without your clicking on anything; all you have to do is view the page. Since the payload loads in the background, you may not notice anything is amiss until it’s too late.
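As a rough illustration of what “just viewing the page” means mechanically, drive-by pages typically lean on markup that the browser processes automatically: hidden iframes, meta refreshes, or scripts that kick off a download with no click. The toy scanner below just flags those patterns in fetched HTML; the patterns and the sample page are assumptions for the sake of the example, not a real detection engine:

```python
import re

# Crude heuristics for markup that runs the moment a page is rendered.
SUSPICIOUS_PATTERNS = {
    "hidden iframe": re.compile(r"<iframe[^>]*(width|height)\s*=\s*[\"']?0", re.I),
    "meta refresh": re.compile(r"<meta[^>]*http-equiv\s*=\s*[\"']?refresh", re.I),
    "auto .apk fetch": re.compile(r"\.apk[\"']?\s*[;)]", re.I),
}

def scan_page(html: str) -> list[str]:
    """Return the names of any drive-by-style patterns found in the HTML."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(html)]

sample = """
<html><body>
<iframe src="http://example.invalid/loader" width=0 height=0></iframe>
<script>window.location = "http://example.invalid/Update.apk";</script>
</body></html>
"""

print(scan_page(sample))  # -> ['hidden iframe', 'auto .apk fetch']
```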
Mobile devices usually require special implementations so that they can browse the web…or at least they did. As the devices get closer to full computers (there are already dual-core models, and quad-cores are on the way) it becomes easier for them to process instructions just like their bigger siblings…but for the time being they have little protection. Programs running in the background require less power than those actually writing to the screen, but they still require power…and power is still the Achilles’ heel of mobile devices. A good anti-virus would require constant power and still might not keep you safe.
Android and iOS are the two major players in the spotlight right now. Apple is seen as the one to beat, but Apple runs a very tightly closed ecosystem. To compete, Android adopted a far more open ecosystem…but the unintended effect is not only that vulnerabilities are easier to find, but that a lot more people have an incentive to find them. “If at first you don’t succeed…” is very applicable here.
Although Android lets you download and install apps from anywhere, in addition to the official Google Play store, this attack still has two requirements.
First of all, the Android device has to have sideloading on (the “Unknown sources” setting has to be enabled) or this won’t work. Secondly, when the suspicious app finishes downloading automatically, the device will prompt the user to install it. Read more…
This, IMHO, is a test. It is not a true drive-by in that it still requires the victim to allow it to proceed. However, the real problem is the fact that it comes up on a legit site that has been hacked; chances are that something like “Update.apk” on a legit Android site might not seem so outrageous to allow. That is the way most malware wants you to think, “It makes sense for this to be here so it must be okay.”
An easy way to get around this one is not to have “Unknown sources” enabled and not to install anything you weren’t expecting. That means anything at all that you did not specifically request does not get permission to install. As for the other half of the requirements, it may be a good idea to enable “Unknown sources” on a site-by-site basis; it’s annoying but it’s less likely that you’ll get hit without your knowledge.
Tablets will be most users’ main computing device, Forrester says
The modern personal computer, IMHO, came onto the scene around the mid-to-late ’70s with the Commodore and the Apple. Before then, computers were weird machines that seemed to be basically giant calculators with a bizarre set of lights attached that were supposed to mean something to the user. The Commodore and the Apple made computers accessible to ordinary people who didn’t understand binary, octal, or hexadecimal. They introduced (or at least made familiar) the idea of a typewriter keyboard as input and a television (cathode ray tube) as a display. Just as the keyboard/mouse/screen combination made computers more personal in the 20th century, the touchscreen/portable device/Internet combination is making the 21st-century version of the PC not seem like a PC at all.
Gillett defines a tablet as having touchscreen capability, weighing less than 1.75 pounds with a 7-in. to 14-in. screen, an eight-hour battery life and always-on operation.
Even while tablet sales grow, they will “only partially cannibalize PCs,” Gillett wrote. “Eventually, tablets will slow laptop sales, but increase sales of desktop PCs.” Many information workers will still need conventional PCs for creative work that requires big processing power or a large display, he said. Read more…
I believe that Mr. Gillett is overlooking something. Tablets are great, but you can’t necessarily take them everywhere. There is another device which is almost as good as a tablet, and you can take it pretty much everywhere: a smartphone. While it’s true that watching movies on a smartphone is not the best experience, it’s good enough to get the idea if you need to see something right this second. There is also the benefit of being able to make calls and to put the device into your pocket when you need to “bounce.” Smartphones will also have significant horsepower within their small frames which will allow them to be used as terminals for cloud computing services just like tablets.
I believe that the tablet will be a major computing device…for those times when you are not at home but are sitting down. Tablets are not really that good for moving around, because the large screens that make them desirable also make them somewhat awkward to hold. Even the special covers with handles don’t change the fact that their weight — much less than a standard laptop’s — is still supported all the way out at the end of your arm. You also can’t comfortably slip one into a pocket and go…you need a special sleeve or at least a bag. Still, once you get where you’re going, a tablet is a great temporary substitute for your full-power rig.
IMHO, if cloud computing takes off at full speed then smartphones will be just as useful for access as a tablet; a control to move viewing from your mobile device to an Internet-capable big screen shouldn’t be very large. For reading books or magazines at a cafe or your friend’s house the tablet will reign; when you want to go out or you need something quickly, your smartphone will always be right at hand.