Monthly Archives: May 2011

Password Analysis Measures Typing Speed and Rhythm

A password is only as good as its user allows it to be. Common words or specific dates, while they can be complex, make sense to anyone and thus don’t make good passwords. Nonsense words or collections of numbers, symbols and upper & lowercase letters make for much better passwords…provided they aren’t written down right next to the computer because they’re nonsense and/or complex collections of numbers, symbols, and upper & lowercase letters. The most awesome password possible becomes the gateway to the system if its user puts it in an easily located cubby.

With the rise of phishing attacks and keystroke logging, it’s clear that even the most complicated and frequently updated password is not totally secure. A new approach from researchers at American University of Beirut analyzes more than just the characters of passwords—it also verifies the speed and rhythm at which they are typed.

Key-pattern analysis (KPA)—the examination of typing beat and speed—can also measure the force with which a user taps keys. This type of analysis creates a biometric profile of the user. Even if the right numbers, letters and symbols are entered, if the biometric profile does not match, the password will fail. Read more…

KPA doesn’t share a drawback of voice-print systems, which can be thrown out of parameters by something as simple as a cold. Those little thumbprint sensors are not the best in the world (they do have to be cheap enough to include on a laptop that normal people can afford) and might be defeated with a bit of determination. Facial recognition cameras aren’t 3D (yet), so they too can be defeated with a bit of determination. Using typing speed and rhythm is a good idea, as the cadence of two different people doing the exact same thing can be vastly different. Let’s just hope they have some way of gathering a typing sample other than making you type the password 10-20 times during setup. That would help the system, but it would be a real bother for the person typing…and it would give a wider sample for an intruder to imitate.
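The enrollment-and-matching idea can be sketched in a few lines. This is a toy illustration, not the Beirut researchers' actual method: the timing vectors, the z-score rule, and the threshold are all assumptions made up for the example.

```python
# Minimal sketch of key-pattern analysis (KPA): compare the rhythm of a
# password attempt against a stored biometric profile. Timing data and
# the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def build_profile(samples):
    """samples: list of timing vectors (seconds between keystrokes)
    gathered during enrollment. Returns per-gap (mean, std-dev)."""
    gaps = list(zip(*samples))
    return [(mean(g), stdev(g)) for g in gaps]

def rhythm_matches(profile, attempt, max_z=2.5):
    """Accept only if every inter-key gap falls within max_z standard
    deviations of the enrolled mean."""
    for (mu, sigma), t in zip(profile, attempt):
        if sigma == 0:
            sigma = 0.01  # avoid division by zero on perfectly regular samples
        if abs(t - mu) / sigma > max_z:
            return False
    return True

# Enrollment: the same user types the password several times.
enroll = [[0.21, 0.35, 0.18], [0.23, 0.33, 0.20], [0.22, 0.36, 0.19]]
profile = build_profile(enroll)

print(rhythm_matches(profile, [0.22, 0.34, 0.19]))  # same user: True
print(rhythm_matches(profile, [0.50, 0.10, 0.45]))  # impostor rhythm: False
```

Even with the right characters entered, the second attempt fails because its rhythm is nothing like the enrolled profile.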

Hasselblad’s 200-megapixel camera: $45,000

That is one awesome camera. I do wonder what it is that costs so much, but I understand that the goal of equaling film stock is an ever-moving target. Not all that long ago, a 9-megapixel camera was supposed to be the digital equivalent of 35mm film; now that many phones have 5-megapixel cameras for snapshots, I presume the bar had to be raised to keep the dramatically higher prices of full cameras viable.

The camera actually uses a sensor with a mere 50 megapixels, but Hasselblad’s multishot technology combines six shots into one. That means moving subjects such as fashion models need not apply. But a lot of this very high-end photography involves static subjects such as jewelry, watches, cars, and paintings for reproduction. Read more…

This sounds roughly like what your eyes do. For instance, if you close one eye and look at something, you see color, shadow, and shape. It isn’t until you look with both eyes that depth literally enters the picture. When you think about it, our eyes aren’t really separated by much space…however, just the fact that they’re separated gives us depth perception. This camera takes a photo, moves its retinal equivalent, takes another photo, moves its retina again, and so on until it has 6 frames of the same subject from slightly different positions. Then it combines the 6 shots into one. You end up with far more subtle information in the gestalt than you would from a single photo…even from a single 200-megapixel sensor.
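The shift-and-combine idea can be illustrated with a toy interleaver. Hasselblad's actual processing (which also cycles Bayer color filters over each pixel site) is proprietary; this sketch only shows the half-pixel-shift part, where four offset exposures interleave into an image with twice the linear resolution.

```python
# Toy sketch of half-pixel sensor-shift combining. The real multishot
# pipeline is proprietary; this just shows the interleave principle.
def combine_half_pixel_shots(shots):
    """shots: dict mapping (row_offset, col_offset) in {0, 1}
    (half-pixel steps) to equally sized 2-D lists of pixel values.
    Interleaves them into one image at twice the linear resolution."""
    h = len(shots[(0, 0)])
    w = len(shots[(0, 0)][0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for (dr, dc), img in shots.items():
        for r in range(h):
            for c in range(w):
                out[2 * r + dr][2 * c + dc] = img[r][c]
    return out

# Four 1x1 "shots", each offset by half a pixel in some direction:
shots = {(0, 0): [[10]], (0, 1): [[20]],
         (1, 0): [[30]], (1, 1): [[40]]}
print(combine_half_pixel_shots(shots))  # [[10, 20], [30, 40]]
```

Each exposure contributes real samples at positions the others missed, which is why the combined frame carries more information than any single shot.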

Automotive Black Boxes, Minus the Gray Area

Event Data Recorders (EDRs) are the vehicle electronic equivalent of an airplane black box. Just like the airplane version, they record certain parameters of the vehicle in question just before, during, and sometimes just after an accident. They record neither voices nor sounds in the rider area as they duplicate the functions of the Flight Data Recorder of an aircraft rather than the Cockpit Voice Recorder. The positions of accelerator, brake, clutch (if applicable), speed, perhaps GPS and deceleration data, and a few other data points are recorded (as long as the device itself survives) for download.
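The "just before, during, and just after" recording scheme amounts to a rolling buffer that gets frozen on a trigger. Here's a minimal sketch; the field names, sample counts, and trigger logic are my own illustrative assumptions, not any EDR standard.

```python
# Toy EDR: a rolling pre-crash window plus a short post-crash tail.
# Window sizes and sample fields are illustrative assumptions.
from collections import deque

class EventDataRecorder:
    def __init__(self, pre_samples=50, post_samples=10):
        self.pre = deque(maxlen=pre_samples)  # rolling pre-event buffer
        self.post_samples = post_samples
        self.post_remaining = 0
        self.frozen = None

    def record(self, sample):
        if self.frozen is None:
            self.pre.append(sample)           # normal driving: overwrite oldest
        elif self.post_remaining > 0:
            self.frozen.append(sample)        # capture the post-event tail
            self.post_remaining -= 1

    def trigger(self):
        """Called on airbag deployment / hard deceleration: freeze the
        pre-event window and start capturing the post-event tail."""
        self.frozen = list(self.pre)
        self.post_remaining = self.post_samples

edr = EventDataRecorder(pre_samples=3, post_samples=2)
for speed in [60, 61, 62, 63]:               # only the last 3 samples survive
    edr.record({"speed_mph": speed, "brake": 0.0})
edr.trigger()                                 # crash detected
edr.record({"speed_mph": 20, "brake": 1.0})
edr.record({"speed_mph": 0, "brake": 1.0})
print([s["speed_mph"] for s in edr.frozen])  # [61, 62, 63, 20, 0]
```

The fixed-size buffer is why an EDR can run for the life of the car yet only ever hand over a few seconds around the event.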

Automakers have long installed electronic data recorders in their automobiles, and the National Highway Traffic Safety Administration has since late 2006 required automakers to tell consumers about the devices. That federal rule also outlines what information is recorded and stipulates that it be used to increase vehicle safety.

Now the National Highway Traffic Safety Administration is considering a proposal that would “expand the availability and future utility of EDR data” — in other words, a possible requirement that all automobiles have the devices. The proposal is expected sometime this year. A separate discussion would outline exactly what data would be collected. Read more…

Right now, EDRs are not required by law in new vehicles the way airbags are, but some manufacturers do include them. For instance, GM’s OnStar system has an EDR attached and can “phone home,” as it were, with information about airbag deployment and vehicle location. This is great if you happen to be incapacitated or your mobile flew out of the window during the departure from controlled driving. It might not be so great if it becomes a nanny guard and reports on your every use of the vehicle.

There is a possibility that the mandatory inclusion of an EDR in new vehicles may not come up all sugar and roses. I think that if EDRs are made mandatory, a way for law enforcement to easily interrogate them will follow. Right now there aren’t any real standards; making some will no doubt standardize the output format, so proprietary interface equipment will no longer be needed for access…just a standard interface. On the face of it, that means those fender benders where the vehicles are not destroyed will have an extra witness to the accident, one that isn’t colored by fear or greed, or confused by a knock on the head. The bad side is cops making a routine traffic stop and interrogating your black box as they give your car the once-over. Give cops the easy power to do it and they probably will. They did it in Michigan.

I’m not saying that EDRs are a bad thing; far from it, IMHO they are a good thing, just like the ones on aircraft. The more data you have on a crash, the better you can reconstruct what happened. My problem comes from the misuse of the data. True, it hasn’t been misused yet, but I have faith that it will be. Couple GPS data with time and you can get speed at a certain place at a certain time…and it doesn’t take much space on a memory device to log things like this. Just think of the iPhone keeping a log of everywhere it’s been for an entire year without seriously impacting the use of the phone. A memory device in a car can hold a lot more.

Duke Nukem Forever has gone gold

*URK* Bubble, bubble, bloop! (Me on the floor trying to make my brain work again.)

How is this possible? I was under the impression that this would NEVER (yes, as in Not EVER!) be finished. No way whatsoever. Don’t get me wrong…I enjoyed the previous iterations of Duke Nukem, but I felt that this one was simply not going to be able to catch up with present technology. Huh, seems I was wrong.

“Duke Nukem Forever is the game that was once thought to be unshippable, and yet here we are, on the precipice of history,” said Christoph Hartmann, president of 2K. “Today marks an amazing day in the annals of gaming lore, the day where the legend of Duke Nukem Forever is finally complete and it takes that final step towards becoming a reality.” Sure, it took us 15 years or so to get here, but the important thing is that the game is coming out. It’s happening. Read more…

This is gonna have to be one hell of a game. If it isn’t an absolutely awe-inspiring leap of optical and auditory excellence, they are totally doomed. It took them 15 years to get to this point. A video game in development for 15 years has run through not only several different game engines, but also different versions of those engines. Each version usually adds enhancements and extensions, but sometimes drops features to make something run or render faster and keep the frame rate up. Some code changes merely help optimize frame rate; others are absolutely required for an enhancement or a switch to a new engine, and switching engines most likely means a total rewrite. While it would be nice to stay on a familiar engine, the main problem with Development Hell is that new methods and techniques arise that everyone then expects to be included; without the new stuff, the old stuff can’t compete. Thus the cycle continues ad infinitum, or until the developers actually retire the project.

Well, apparently they think they have the game, as it is due to be released June 14, 2011. We shall see.

Many browsers run insecure plug-ins, analysis finds

Still. Locking the barn after the horse has been stolen is useless…locking the barn door while the horse is still inside but leaving the key on a hook outside isn’t all that useful either. Yes, there are some places in the U.S. where you can leave your house unlocked and drive away for the day without worrying about someone stealing your stuff, but the Internet is global and thus not subject to local niceties or conventions. Qualys has a browser-checking tool which will shuffle through your browser’s plug-ins if you give it permission to do so…after downloading the tool itself.

The most vulnerable plug-in was Java, installed on 80 percent of browsers, 40 percent of which were running an out-of-date version of the software open to exploits. Adobe Reader took second spot, also installed on 80 percent of browsers, just over 30 percent of which were vulnerable.

A commonly-cited worry, Flash video, was vulnerable on a more modest 20 percent of browsers despite being present in more than 95 percent of them. Other video players such as Shockwave and Quicktime showed vulnerability levels of between 20 and 25 percent but were installed on only around 40 percent of browsers.

Overall, around 80 percent of browser-related security flaws now lie with plug-ins and only 20 percent with browsers, regardless of which browser was looked at. Read more…
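It's worth composing the excerpt's figures: "installed on X percent of browsers, Y percent of which were vulnerable" multiplies out to a share of *all* browsers. This quick check assumes, as the article implies, that the vulnerability percentages are measured among browsers that have the plug-in installed.

```python
# Back-of-the-envelope on the article's figures: what fraction of ALL
# browsers carry each vulnerable plug-in? Assumes "vulnerable" is
# measured among browsers with the plug-in installed.
plugins = {
    "Java":         (0.80, 0.40),  # (installed share, vulnerable-of-installed)
    "Adobe Reader": (0.80, 0.30),
}
for name, (installed, vulnerable) in plugins.items():
    print(f"{name}: {installed * vulnerable:.0%} of all browsers")
# Java: 32% of all browsers
# Adobe Reader: 24% of all browsers
```

So roughly a third of all browsers scanned were exploitable through Java alone, which puts the "80 percent of flaws are in plug-ins" finding in perspective.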

Browsers are what we use to look at the Internet. Just like any other window (no pun intended), they can get dirty and accumulate gunk. The dirtier they are, the more dirt they seem to accumulate…at an ever-increasing pace. Dirt can also hide tiny cracks which, if left unfixed, can one day blossom into yawning chasms seemingly on their own, with no external force applied. Nice, clean windows show tiny problems at a glance so we can get them fixed quickly. Don’t forget: while you may know what you put on your computer, those people (family) to whom you give Internet help may not realize what’s in their browser, or what some new program is. When they need help, they will come running to the one they trust…and they won’t hear the “Why, Why, WHY?”

We are getting better with the big stuff, but the little stuff can still hurt us. In this instance, you really do need “to sweat the little stuff.”

World’s Smallest 3-D Printer Could Find Its Way Into Your Home

3D printers are awesome. The idea that, with the right kind of medium, you could print anything brings to mind the Star Trek replicator…or at least a more primitive version that puts us on the road to the real one. Right now, resin replicas are about all you can make with anything you could have at home (unless your name is Bill Gates or Steve Jobs), but laser printers used to be expensive too; now they’re given away as free gifts.

The university claims this is the world’s smallest 3-D printer, designed to print with a special kind of synthetic resin that instantly and precisely hardens when hit with an intense beam of light. That gives it the ability to print very intricate as well as very sturdy objects. It uses a focused beam of light, hardening layers only a twentieth of a millimeter thick, which is delicate enough that the university says it can be used to print finicky objects like hearing aid parts. Read more…
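That "twentieth of a millimeter" figure makes for easy arithmetic on how many passes a part needs. The part height and the seconds-per-layer figure below are made-up assumptions purely for illustration.

```python
# Layer count and rough build time at the quoted layer thickness.
# Part height and seconds-per-layer are hypothetical example values.
layer_mm = 1 / 20              # 0.05 mm per hardened layer (from the article)
part_height_mm = 15            # assumed: a small hearing-aid-sized part
secs_per_layer = 4             # assumed: time to expose one layer

layers = part_height_mm / layer_mm
build_minutes = layers * secs_per_layer / 60
print(f"{layers:.0f} layers, ~{build_minutes:.0f} minutes")  # 300 layers, ~20 minutes
```

Hundreds of layers for a thumb-sized object is why this class of printer trades speed for the fine detail the university is touting.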

While this is a valuable breakthrough, I see three problems with this implementation: the medium, the inability to make moving parts, and the size of the printer. First, resin is not usually a very strong material; while it can take some interesting shapes, they usually won’t bear much load, so you probably won’t be able to print a table or a chair. Second, the printing method makes moving parts problematic unless the gadget can be hand-assembled and finished, as rotating parts absolutely hate square edges. Third, unless the printer is intended to skate across the surface of a large resin pool, anything larger than, well, a breadbox won’t be printable as a monolithic block. If you have a design that can be made in sections that slot together, then you can print it; otherwise…

Even with the limitations, it’s a start…and a good one at that. There are a lot of instances where simply having a 3D representation of something is enough to decide if you really want it or not. With a little judicious application of paint, no one will be the wiser…unless they pick it up.

Cloud computing’s big surprise

Sci Fi writers and some visionaries would have us believe that in the future, all computing power will be in the cloud and accessible from everywhere using any device. The Internet is similar to that, except it’s mostly communication, and there are still discrete areas that contribute to the whole; there is no real country-wide processing system that can be tapped into simply by plugging in.

The evolution of the electric grid was presented as the clearest parallel to cloud computing. However, many types of industrial processes are more efficient at large scale; backyard steel production doesn’t work out well.

But something funny happened on the way to the cloud. Many applications, especially those used by consumers and smaller businesses, did indeed start shifting to public cloud providers like Google. However, with some exceptions, the trend in large organizations is something quite different. As I’ve written previously, the idea of there being a “Big Switch” in the sense of all computing shifting to a handful of mega-service providers is, at a minimum, overstated.

In part, this is because computing is a lot more complicated than electricity. The electrons powering a motor don’t have privacy and security considerations. The electrons encoding a Social Security number in a data center do. Plenty of other technical and governance concerns also conspire to make computing less utility-like. Read more…

Electricity was pretty much fundamentally dissected in the 1820s, and about 60 years later it was being generated by central stations. Computers came about in the 1940s and have only just started to explore the role of a utility. However, unlike electricity, there is no computing grid…at least not for ordinary people. For real cloud computing to take off, it has to be absolutely private and totally secure: no matter what resource is running the program, it must be accessible only by you or those to whom you have given access…NO ONE ELSE. The only way to really do that now is to physically possess the computers that are running the cloud…which negates the whole idea of a real cloud: if you need more power than you have, the only option is to buy more computers, which will sit expensively idle when you no longer need their extra horsepower.

IMHO, what is needed is a system where your programs and data follow you and basically install themselves at the physically nearest server farm, just like electrical power. A backup of all your data will have to be stored somewhere else in case of local failure. You can have portable servers, like portable generators, in areas that have little connectivity and urban areas will have much larger farms than rural areas simply due to the different living densities involved. The system will have to be based on non-volatile memory so if there is a power failure your status will be preserved until the system comes back up. There should be a way to take a snapshot of your present status for those of the more paranoid temperament.

Even with the stuff I mentioned, there is still a lot of trust in the provider involved. Unlike electricity, code is not the same from person to person, and it can be quite lucrative to snag another’s data. Perhaps a biometric reader that requires a living key might be the solution. It’s either that, or we’re going to need the other thing that Sci Fi writers and visionaries often discuss: a truly ubiquitous government surveillance system.

Creating Apps to Fuel Growth

GasBuddy.com was a good idea. Considering that gas today is over $4 a gallon, finding the least expensive station closest to you would be ideal…though finding a less expensive one further away is almost as good. The only catch is that it had to be easy to use. This is crowdsourcing at its best, but to be effective it needed to be updatable from a mobile application.

Then, in 2009, the partners realized the site’s limitations could be corrected with a series of mobile apps. A user could plug in pricing information, or search it, from his or her phone. So the company launched Android and iPhone apps later that year, and they were instantly popular. (At one point, the Android app rose as high as No. 2 on the most popular list for the entire Google Android app store, behind only a ribald game app called Ow My Balls!) Read more…

Bingo! With the addition of mobile, a good idea turns into a great resource. While smartphones and tablets are making true Internet access ubiquitous, their smaller screens and lack of mouse hover make accessing a standard website problematic at best. Feature phones and BlackBerrys typically have even smaller screens, which makes the problem even more acute. A version of a site optimized for mobile makes access and usability much easier, and in GasBuddy’s case it made the site far more reliable and useful. It’s amazing how a small modification can have such wide-reaching consequences.
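The "cheaper but further away is almost as good" trade-off is easy to make concrete. GasBuddy's actual ranking logic is unknown to me; this is a hypothetical sketch with made-up stations, picking the station that minimizes the fill-up price plus the cost of fuel burned on the round trip.

```python
# Hypothetical sketch: cheapest fill-up once travel cost is included.
# Station data, mpg, and tank size are invented example values.
from math import radians, sin, cos, asin, sqrt

def miles_between(a, b):
    """Great-circle (haversine) distance between (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3956 * asin(sqrt(h))

def best_station(here, stations, gallons=12, mpg=25):
    """Pick the station minimizing fill-up price plus round-trip fuel cost."""
    def total_cost(s):
        trip = 2 * miles_between(here, s["loc"])
        return gallons * s["price"] + (trip / mpg) * s["price"]
    return min(stations, key=total_cost)

stations = [
    {"name": "Corner Gas",   "loc": (38.90, -77.03), "price": 4.19},
    {"name": "Thrifty Fuel", "loc": (38.95, -77.10), "price": 3.99},
]
print(best_station((38.91, -77.04), stations)["name"])  # Thrifty Fuel
```

Here the station four-plus miles out still wins, because twenty cents a gallon on a twelve-gallon fill dwarfs the cost of the detour.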

A device turns anything into a touchscreen, allows you to paint in midair

This is a really sweet piece of equipment. I could see tinkerers initially making this into a TV controller or, if they’re really brave, a NERF golf simulator.

It works with a special kind of optical force field: A frame is set up to beam thousands of light beams through it. So when a hand, fingers, or any body part, enters the optical force field – the device knows something is interfering with it. The device operates with 256 infrared sensors and 32 LEDs – and hooks up to a computer through a USB connection. Read more…

All we need now are the gloves to simulate a keyboard touch and we have the basis for the Mass Effect Haptic Adaptive Interface. Painting and other stuff in the air is nice, but there needs to be a way to feel what you’re doing. Anyone who has attempted to play a theremin knows the difficulty of using a zero-contact interface. The catch with a general-purpose version is that, without a frame of reference, control is a matter of positional memory; unfortunately, the positions will change for every different application. If the interface icons are large and few in number, the changing positions aren’t much of a bother; something like a specialized keyboard will need much greater spatial precision.
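The beam-break localization itself is simple: LEDs line two edges of the frame, sensors line the opposite edges, and a finger blocks one beam per axis, so the blocked pair's intersection is the touch point. The grid size below is illustrative, not the device's actual 32-LED/256-sensor layout.

```python
# Sketch of beam-break touch localization on an LED/sensor frame.
# An 8x8 grid is assumed for illustration only.
def locate_touch(row_beams, col_beams):
    """row_beams / col_beams: lists of booleans, True = beam received
    unobstructed. Returns (row, col) of the first blocked intersection,
    or None if nothing is inside the frame."""
    blocked_rows = [i for i, ok in enumerate(row_beams) if not ok]
    blocked_cols = [j for j, ok in enumerate(col_beams) if not ok]
    if blocked_rows and blocked_cols:
        return (blocked_rows[0], blocked_cols[0])
    return None

rows = [True] * 8
cols = [True] * 8
rows[3] = False   # finger interrupts horizontal beam 3
cols[5] = False   # and vertical beam 5
print(locate_touch(rows, cols))  # (3, 5)
```

With enough beams, the same scheme resolves multiple fingers and, by timing successive frames, motion, which is all you need to "paint" in midair.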

White House unveils global cyberspace strategy

So, does this mean that things like Stuxnet may become part of the cyberspace arsenal? I suspect that it and many other worms are already in a heavily classified zoo of automated warriors that will be successfully unleashed upon even a suspecting enemy.

The White House called for increased cooperation among law enforcement agencies worldwide in fighting cybercrime and promised a robust response to “those who would seek to disrupt networks and systems.”

“When warranted, the United States will respond to hostile acts in cyberspace as we would to any other threat to our country,” it said.

“We reserve the right to use all necessary means — diplomatic, informational, military, and economic — as appropriate and consistent with applicable international law, in order to defend our nation, our allies, our partners, and our interests,” it added. Read more…

I wonder if cybernetic infections will soon have their own place at the CDC. Equipment that is typically used to keep people alive will soon be Internet-enabled; if one of these instruments catches a virus, the outcome could be really, really bad.

Anyway, the strategy clearly states that an attack in cyberspace is the same as an attack in real space. You have to wonder: can an attack in cyberspace lead to retaliation in real space? I would say yes.