Monthly Archives: April 2012

Tablets will be most users’ main computing device, Forrester says

The modern personal computer, IMHO, came onto the scene in the mid-to-late ’70s with the Commodore and the Apple. Before then, computers were weird machines, basically giant calculators with a bizarre set of lights attached that were supposed to mean something to the user. The Commodore and the Apple made computers accessible to ordinary people who didn’t understand binary, octal, or hexadecimal. They established (or at least made familiar) the idea of a typewriter keyboard as input and a television (cathode ray tube) as a display. Just as the keyboard/mouse/screen combination made computers more personable in the 20th century, the touchscreen/portable device/Internet combination is making the 21st-century version of the PC not seem like a PC at all.

Gillett defines a tablet as having touchscreen capability, weighing less than 1.75 pounds with a 7-in. to 14-in. screen, an eight-hour battery life and always-on operation.

Even while tablet sales grow, they will “only partially cannibalize PCs,” Gillett wrote. “Eventually, tablets will slow laptop sales, but increase sales of desktop PCs.” Many information workers will still need conventional PCs for creative work that requires big processing power or a large display, he said. Read more…

I believe that Mr. Gillett is overlooking something. Tablets are great, but you can’t necessarily take them everywhere. There is another device which is almost as good as a tablet, and you can take it pretty much everywhere: a smartphone. While it’s true that watching movies on a smartphone is not the best experience, it’s good enough to get the idea if you need to see something right this second. There is also the benefit of being able to make calls and to put the device into your pocket when you need to “bounce.” Smartphones will also have significant horsepower within their small frames which will allow them to be used as terminals for cloud computing services just like tablets.

I believe that the tablet will be a major computing device…for those times when you are not at home but are sitting down. They are not really that good for moving around because the large screens that make them desirable also make them somewhat awkward to hold. Even the special covers with handles don’t address the fact that their weight — which is much less than a standard laptop’s — is still supported all the way out at the end of your arm. You also can’t comfortably slip one into a pocket and go…you need a special sleeve or at least a bag. Still, once you get to where you’re going, a tablet is a great temporary substitute for your full-power rig.

IMHO, if cloud computing takes off at full speed, then smartphones will be just as useful for access as tablets; a control that moves what you’re viewing from your mobile device to an Internet-capable big screen doesn’t need to take up much screen space. For reading books or magazines at a cafe or a friend’s house, the tablet will reign; when you want to go out or you need something quickly, your smartphone will always be right at hand.

Penguin Thieves Panic, Face Charges After Facebook Post

Is it a sign of the times that incidents like this keep happening? I don’t mean the penguin-napping (snatching any docile but technically wild creature), or the swimming with dolphins after hours (better hope they’re in a good mood, as they’re faster, stronger, and, unlike us, can actually “see” in totally dark water), or even videotaping their shenanigans. I mean the inevitable posting on Facebook of what they knew with every fiber of their being was absolutely illegal to do. At first they got away with it, so it could be called a prank; getting caught raised the act to the level of at least a misdemeanor. A prank is something you secretly smile about, but a misdemeanor puts you on the watch lists of law enforcement around the country.

“Can’t believe … penguin in our apartment man … we stole a penguin,” one of the men says in a video, apparently taken after the hung-over trio woke up.

The men — ages 18, 20, and 21 — apparently got drunk, broke in to Sea World after hours, and stripped down to their underwear to swim with the park’s dolphins, 7News reports. But then some other creatures caught their eye. Read more…

Since this happened in Australia, they won’t also be charged with underage drinking, contributing to the delinquency of a minor, inappropriate behavior, public nudity, etc. Had this happened in the United States, this incident would most likely have been labeled a felony and the three of them would have a nice prison stay to anticipate. They would also be liable for the repair of whatever security they bypassed to get into the park in the first place, the cleaning of the dolphin tank, the cleaning of the penguin tank, and the therapy cost of re-acclimating Dirk to his safe home. There, just as here, no one would have known “whodunit” unless someone in the crew blabbed…at least before Facebook arrived.

Social media has made people rich, poor, accompanied, lonely, passive, combative, voyeuristic, exhibitionistic, etc. Social media has also been a cornucopia for law enforcement or anyone else looking for information on someone. Things that people would never think of saying in a public place like a bar (e.g., “I sugared Mr. Miller’s gas tank”) they will happily post on their social media page. People also seem to be quite comfortable posting their crimes to their social media pages: how many times have you heard of someone claiming some disability or other and then posting evidence to their social media pages that they were outright lying? It has happened and no doubt will continue to happen.

IMHO, the vast majority of humans like to share so much that any opportunity to do so is taken with glee. This is not a bad thing. The problem arises with the choice of venue. You can say something to someone and, unless they write it down or what you did was so mind-boggling that it was impossible to forget, they will most likely forget it in a relatively short period of time. (There is an old game called Telephone which demonstrates this fact beautifully.) The Internet has changed this assumption. Now, anything that you release onto the Internet is pretty much out there forever…and you have no real control over where it goes and who sees it. You may limit your posts to your friends and family, but what about them? Whom do they allow to see their stuff? Do you know everyone they know? Are you sure? How many of them have cop “friends”?

If you are going to post something to the Internet, first act as if you were going to shout it through a megaphone at Grandpa’s WWII Survivors Reunion or Grandma’s Bridge Club. If the idea of doing so makes you uncomfortable, then perhaps you shouldn’t post it online, where not only your Grandma might see it but also the Grandma of that cutie you’re trying to chat up…or law enforcement trying to solve a crime for which they didn’t have any leads before your post.

Tim Berners-Lee: demand your data from Google and Facebook

There is no doubt that there is a lot of information about us out there on the Internet. From the gated communities of iTunes and Facebook to the Wild West of Google et al., sites are looking at what you do while you’re there, what you look at, what you look for, how long you stay, and sometimes even where you came from and where you go when you leave, among other things. They have our likes, dislikes, and wish lists, and in some cases even lists of what we already have (like GameStop asking which games you already possess).

These companies have so much information on us, yet strangely we don’t actually have the same comprehensive information on ourselves. It’s becoming easier to request this information, but data from one place rarely synchronizes with data from another. Companies that are not affiliated with each other have different ways of doing the same thing and — like OS-whatever and Windows-whatever — don’t really like playing with each other. And if program incompatibilities weren’t enough, it doesn’t seem — to me at least — to be in the best interests of the companies that hold information on us to make it clearly readable to us. Perhaps it’s an effort to make requesting our information tedious or, more likely, they don’t want us to take the information they spent time gathering somewhere they won’t reap the benefits.

Berners-Lee, the British-born MIT professor who invented the web more than two decades ago, says that while there has been an explosion of public data made available in recent years, individuals have not yet understood the value to them of the personal data held about them by different web companies.

In an interview with the Guardian, Berners-Lee said: “My computer has a great understanding of my state of fitness, of the things I’m eating, of the places I’m at. My phone understands from being in my pocket how much exercise I’ve been getting and how many stairs I’ve been walking up and so on.” Read more…

Yes, there are many discreet sensors collecting, correlating, and communicating data about us all the time. Not too long ago, we had to jump through hoops to opt out of Internet hoovering…now (for the most part, anyway) a simple click can usually allow us to avoid any further collection from one specific place. The downside to the simple opt-out seems to be that if we should mistakenly fail to opt out when we first have the chance, then we do have to jump through some hoops to do so later.

I personally think it’s a good idea to have our data; we should have a comprehensive electronic pattern of who we are. This electronic DNA (e-DNA) could help us with a lot of things in cyberspace and in meatspace as well. Of course, we would need two major breakthroughs to make e-DNA work: first, a standard format to which the information must adhere, and second, a secure way to share what we want, with whom we want, when we want to do it. We cannot simply give our e-DNA to a company and expect them to use only what they actually need to fulfill our requests; companies now collect far more information about us than is needed. Think of this: why do companies request our birthdays to verify our ages? Is it that kids are so poor at math today that they will automatically type in their real birthdays rather than a date which satisfies the requirements? No, I believe that the intent is to get information on you that they can then cross-reference with information they got from an affiliate. The more information they have on you, the better they can advertise to you.
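To make the idea a bit more concrete, here is a minimal sketch of what a scoped e-DNA record might look like. This is purely hypothetical: the EDnaRecord and AccessGrant types and the field names are my own invention, not any existing standard.

```java
import java.time.Instant;
import java.util.Map;
import java.util.Set;

// Hypothetical e-DNA container: one standard record of personal data,
// with the owner deciding which fields each party may read and for how long.
public class EDnaRecord {

    // A grant names the fields a party may read and when the permission expires.
    public record AccessGrant(Set<String> visibleFields, Instant expires) {}

    private final Map<String, String> fields;      // e.g. "birthdate" -> "1970-01-01"
    private final Map<String, AccessGrant> grants; // party -> what they may see, until when

    public EDnaRecord(Map<String, String> fields, Map<String, AccessGrant> grants) {
        this.fields = fields;
        this.grants = grants;
    }

    // Release a field only if the requesting party holds an unexpired grant for it.
    public String read(String party, String field) {
        AccessGrant grant = grants.get(party);
        if (grant == null
                || Instant.now().isAfter(grant.expires())
                || !grant.visibleFields().contains(field)) {
            return null; // no need-to-know, no data
        }
        return fields.get(field);
    }
}
```

Under a scheme like this, a store could be granted a single “over18” field for the duration of a purchase without ever being handed the birthday behind it.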

The day is coming when all our information, from credit history and medical records to Internet search patterns, data storage, and friends and family, will be contained within one data silo. There will be multiple copies of this silo but, unlike true meatspace copies, our data silos will be linked together, and when one is updated they all will be updated.

The only real problem is how to make it work. Either there will have to be a master data repository (or repositories) which houses our silos, or we will have to have some pretty sophisticated computers physically located within our homes that house our silos. The master data silos will either be run by government or be accessible to government…both a little unnerving. The home servers, to give us the warm fuzzies, would have to be able to run something like IBM Watson to keep access to our data on need-to-know status at all times and actively discourage hacking attempts, but they would alleviate the concern that one giant company (or the government) has all our data…for a little while. As I have said many times, anything let out on the web once is out there forever; there is no sure-fire way of erasing every instance of a particular piece of data. There is presently nothing to keep companies from copying the data you have given them permission to view except their word…and words can be manipulated to say literally anything.

Tupac Shakur resurrected as ultra-realistic “hologram” for live performance


WARNING!! Video is NSFW!

Wow. When I first saw Star Wars (what is now referred to as Star Wars: A New Hope), two things really stuck with me: the Millennium Falcon, and the holographic projection of Princess Leia asking for help. I knew it was nonsense, since for us to see such a projection we would either have to be looking directly into the beam or it would have to be reflecting off of something. That knowledge in no way lessened the cool factor of the image; I had seen holograms before and knew that color holograms were not too far off. I also realized that since Obi-Wan and Luke were sitting in one place while the recording played, the 3D effect could be mimicked by a sophisticated computer system (R2-D2) which only had to create the effect for a very narrow viewing angle. There was still the problem of making an image coalesce in thin air, though…but hey, they were far more advanced than we were at the time — even though it was a long, long time ago in a galaxy far, far away.

Princess Leia was a recording; later in the series, we were introduced to the holographic projection of the Emperor actually speaking to Darth Vader. I didn’t see why something like that would be worth the processing overhead instead of simply using a screen, but once again it was cool to see. There wasn’t a lot of interaction with the hologram other than subservience, but there was interaction.

If I remember correctly, it really wasn’t until Attack of the Clones (officially Star Wars Episode II: Attack of the Clones) that the usefulness of a hologram over that of a flat screen became evident. Attack is where they showed the hexapod (octopod?) mobile hologram emitters that could walk with someone and allow a conversation as if the person were actually present. We do have those rounding robots with the flat-screen heads, but how much more awesome would it be to see a hologram with some depth to it? Could you imagine a young Star Trek fan in the hospital having their physician show up as an Emergency Medical Hologram in a Starfleet uniform? Perhaps a Harry Potter fan could have their physician show up dressed as Dumbledore (or even Snape)?

The amazing realism of the projection has already reignited the rumors of Tupac still being alive (nevermind the fact that he hasn’t aged a day, he’s a bit see-through, and his feet noticeably slide all over the stage like he’s on an ice rink). AV Concepts has also stated that the technology could be used to digitally revive almost any deceased performer, from Michael Jackson to Freddie Mercury, though there is quite a bit of debate over whether this should become a regular practice. Read more…

Of course, the performance wasn’t really “live.” However, even though there’s still a ways to go, it was very impressive. As long as the screen can remain hidden, suspension of disbelief is easily achieved. I wonder how far they will go with this, though. CGI is rapidly approaching the point of visual reality…remember the T-Rex from Jurassic Park? That was in 1993 — almost 20 years ago. Now we have CGI crowds (Titanic?) that move well enough for us to pretty much accept them as living beings behaving the way living beings would under the circumstances.

I gather that, depending on whether or not they can get permission to do so, there will be more digitally resurrected people in our future. Perhaps they could have an artist sing a duet with themselves, or even have an entire choir of themselves. Making an artist perform one of their songs is no doubt a challenge, but the (public) personality of the artist is already contained within the recording. What would happen if they wanted to resurrect someone for whom we don’t have any recordings, like George Washington or Abraham Lincoln? As we are well aware, that first impression is crucial; how imposing would Darth Vader have been if he had been voiced by Justin Bieber?

There is one other thing I think would be a perfect but somewhat creepy match: this technology with a version of IBM Watson. Of course, Watson would have to acquire as much information as possible on the person to be resurrected, but knowing everything about one subject should be far easier than knowing a lot about everything while not being able to look anything up on the Internet. Actually being able to use the Internet, or to call a reference colleague in real time, would transform teaching. Imagine physics class getting a visit from Sir Isaac Newton or hearing about the Civil War in history class from Abraham Lincoln…how much more interesting would those classes be?

How big a security risk is Java? Can you really quit using it?

There was once a time when the OS of a home computer was actually housed on a ROM (Read-Only Memory) or PROM (Programmable Read-Only Memory) chip physically located on the motherboard. Needless to say, boot-up was nearly instantaneous, but making improvements or bug fixes to the OS was a bear at the very least. With a ROM you had to replace the chip…period. With some of the later iterations, like the EEPROM (Electrically Erasable Programmable Read-Only Memory), you could erase and reprogram the chip while it was still in the circuit. It was still a bear…but much less of a bear than needing to procure a new chip.

These old systems typically had proprietary programming code to go along with them. A program written for one manufacturer (Atari) would not work on another (Commodore) because the OS was intimately attached to the hardware; you did not buy a computer and then choose an OS to put on it. That is not to say that there weren’t tinkerers; it’s just that tinkerers were locked to a platform, and tinkering was strictly limited to the languages supported by the system (BASIC, COBOL, FORTRAN, etc.). The only way to mess with the OS was to burn your own chip.

The modern version of a home computer is rather different. There are basically only two types of home computer: those that use Apple hardware and those that use something else. However, due to the need to make updates and patch holes, the new version of the OS is all software; it is no longer “set in silicon” as it used to be. (The exceptions, of course, are companies using read-only thumb drives as re-installation media.)

As there are significant functional differences between iOS, Mac OS, Windows, Linux, etc., it was rather difficult to write a program that would run on all of them without writing it separately for each of them. What was needed was a standard language that they all understood regardless of their internal configuration; a language that was powerful and flexible. Java fit this bill. As long as the virtual machine for each OS was up to date, code written in pure Java would run. Initially this was a good thing, as the OS with the largest reach was the one coming under attack; rewriting code to target only the other OS flavors would have left a significant portion of the population out of the loop. Everyone watched as the OS under siege weathered the storm; the other OS flavors tightened ranks (or pointed fingers) and secretly reveled in the knowledge that they weren’t the ones under the withering barrage. They forgot one thing in their relief: while they could seek out and plug vulnerabilities in their own code, they could not do the same for code that had permission to run on their systems but was not created by them.
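As a concrete illustration of that appeal, here is a minimal sketch of the “write once, run anywhere” idea; the class name and printed labels are my own example, but the system properties it reads are standard Java.

```java
// Minimal sketch: the same compiled .class file runs unmodified on any
// operating system that ships a compatible Java Virtual Machine.
public class WhereAmI {
    public static void main(String[] args) {
        // The JVM hides the platform; the program merely asks which OS
        // and architecture it happens to be running on right now.
        System.out.println("OS:   " + System.getProperty("os.name"));
        System.out.println("Arch: " + System.getProperty("os.arch"));
        System.out.println("Java: " + System.getProperty("java.version"));
    }
}
```

The very convenience that makes this work, one runtime trusted by every OS, is also what makes that runtime such an attractive target.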

That’s the problem with exploits that target vulnerabilities in cross-platform runtimes like Flash Player and the Java Runtime Environment (JRE). Even if your operating system is fully up to date, an unpatched vulnerability in that third-party code can lead to havoc.

As the Mac community discovered, a user can go to a perfectly legitimate site, be infected with absolutely no warning, and have untrusted code running on the box. That infection typically includes a component that can download additional malware later, also without warning. Read more…

Imagine three pets: Spot, Rover, and Ruffy. Spot is a Great Dane who has a real doghouse in a shady part of the lawn; Ruffy is a friendly but agoraphobic housecat who cannot go outside; Rover is a Jack Russell Terrier who plays with both. One day, Spot gets fleas. The next day, Ruffy has fleas. How does Ruffy get fleas if she never goes out? Simple. She got them from cuddling with Rover after he spent time with Spot…at least, that’s what their owners thought. As it turns out, the veterinarian informed them that the fleas their pets had were not indigenous to the area…and the timeline of the infestation indicated that Spot was infected second. Their Patient Zero was Rover, who roamed the neighborhood playing with the neighborhood kids.

Since Rover was very friendly, he approached anyone willing to scratch him behind the ear…especially those nice people in the white coats who would give him dog treats at the same time every day and brush his fur with the funny brush with the tube on it.

An easy way to fix the problem is to lock Rover up…but then the neighborhood kids come to your house to play with him, thus crowding your house. You could be mean and tell the kids not to come around, but then very few people in the neighborhood would be willing to help you. There is also the possibility of entrusting Rover to someone…but then they can teach him all sorts of bad habits. If only you knew where he got the fleas in the first place, but since he can’t talk and the tracker you had on him didn’t take pictures…

Is Java a risk? Yes. Is surfing the web a risk if you avoid the seedy places? Yes. If you have your OS fully patched and your antivirus up to date and you avoid the seedy places, is surfing the web still a risk? Yes. You could disable Java…but you could also disable images to make your browser faster. That may make things a little…monochromatic?

This is one of those annoying decisions where the pros and cons are specific to you. Chances are that whatever you need Java enabled for will also be the most likely vector, so turning it off probably doesn’t afford any protection other than the perception of protection. Having heard about people being hit by lightning, you can stay indoors whenever you hear thunder…but remember that the first strike will hit without warning, since light is so much faster than sound.

New Mac malware epidemic exploits weaknesses in Apple ecosystem

I must admit to a little confusion: many Mac users (and seemingly Apple itself) do not consider a Mac to be a PC, which IMHO stands for Personal Computer. It seems that for many, only a Wintel box is a PC and a Mac is…a Mac. Therefore, if you want to be safe from viruses and malware and trojans and whatever, you need to get a Mac rather than a PC, right?

Umm, wrong.

What makes this outbreak especially chilling is that the owners of infected Macs didn’t have to fall for social engineering, give away their administrative password, or do something stupid. All they had to do was visit a web page using a Mac that had a current version of Java installed. Read more…

There is a common misconception that an OS can actually be inherently immune to viruses; that a virus can only attack one kind of system. IMHO this is a fallacy, and a very dangerous one.

What many don’t seem to understand is that vulnerabilities are not actually “holes” in an OS; they are not open gateways that hackers find and enter. Vulnerabilities are basically certain situations which, if exploited, can compromise a system. Granted, some problems can be attributed to sloppy coding, but others are unusual situations that usually wouldn’t crop up without a very peculiar set of circumstances. For instance, consider your house keys; with them, your home and all of your possessions are vulnerable to theft or destruction. If a thief got your house keys, you’d be toast, right? Well, is your address conveniently stamped on your keys? Is whether or not you have an electronic home monitoring system also stamped on them? Is the code for the system stamped on them too? Is the fact that you have a vigilant neighborhood watch in effect located with your keys? With enough information, having your house keys equals loss of your stuff. With enough information any security can be defeated…the question is whether or not making the effort to defeat the security is worth it.

Checking for vulnerabilities is not cut-and-dried. The number of lines of code in a modern OS runs into at least the tens of millions, and combinations grow fast: with just four symbols — 0, 1, 2, 3 — you can form 4^4 = 256 different four-character strings; add just one more symbol and that jumps to 5^4 = 625. You might think that for a computer those are not big numbers, and you’d be correct. However, we aren’t talking about simple numbers…we’re talking about lines of code which have to be executed and then combined with the results of other processes to see if there is a problem. It would be great to be able to make code bulletproof, but doing so requires more lines of code, which means more processing, which means more CPU overhead, which means bloat…and we all know what that means. Generally, then, the OS is stress-tested under abnormal conditions and passed if it succeeds. It sounds bad, but “perfect” is not required to get software out the door; all they need is “good enough” and it’s good to go.
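To put rough numbers on that (my own illustration, not a measurement of any real OS), here is a sketch of why exhaustive checking is hopeless: if each independent branch in the code doubles the number of possible execution paths, the total explodes long before you reach millions of lines.

```java
import java.math.BigInteger;

// Rough illustration: with n independent branches there are 2^n possible
// execution paths, which is why exhaustive testing gives way to "good enough".
public class PathExplosion {
    public static void main(String[] args) {
        for (int branches : new int[] {10, 20, 40, 80}) {
            BigInteger paths = BigInteger.valueOf(2).pow(branches);
            System.out.printf("%2d independent branches -> %s possible paths%n",
                              branches, paths);
        }
    }
}
```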

I say all this to say that Apple has been safe for some time because it just wasn’t worth the effort to find the security holes. Apple’s market share was boutique, not mainstream. This has changed in recent times, and now Apple is in a bit of a bind: they need good practices and antivirus systems, but their advertising and general attitude have been that only PCs are affected by malware. Apple has been marketed as the computer “that just works,” so much of the installed user base is not really computer-savvy because they haven’t had to be. Now they’ll have to catch up with the Wintel users and get used to the bad stuff.

By the way, I’m sure there are many saying, “Well, what about Linux or Unix?” I have a theory about that. IMHO, most viruses and malware are specifically targeted against OSes that force you to conform and pay lots of money for the privilege of using them. Also, it’s far more difficult to sneak something past the millions who write Linux code than past the hundreds or maybe thousands who write proprietary OSes. It is also really, really bad to defecate where you eat. I have no proof of that…it’s just a hypothesis.

UK plans to monitor all online comms are “waste of money”

The United Kingdom is already generally regarded as the country with the highest rate of surveillance in the world. There are cameras pretty much everywhere (if there are bathroom cameras, they are very well hidden and not acknowledged), so people are largely used to them. Citizens are not particularly fond of the cameras, but they are tolerated…and as a result, most of the cameras are out in the open.

There are some things to take into account when dealing with surveillance cameras, though. For instance, there is a difference between focal view and field of view. While the focal view of a camera might be the register and front door of a specific store, the field of view can encompass the walkway in front of the store and even some of the parking lot. It’s just like your eyes: what you focus upon frequently comes with quite a bit of peripheral information. You may be watching a great movie on the television, but if a DVD under the TV moves seemingly by itself, you will notice it rather easily. Supposedly, the UK government is planning to monitor the Internet within its borders while paying attention only to what it focuses upon and nothing else.

The government said that police and security services need to be able to monitor all online communications “to investigate serious crime and terrorism and to protect the public”.

Monitoring on this scale will require UK internet service providers to install systems to intercept and analyse internet traffic using deep packet inspection (DPI), according to Cambridge University security researcher Dr Richard Clayton.

DPI can be used to monitor everything a person does online, from the web pages they visit to the messages they send to their friends. However the government said it plans to implement systems that will only monitor “communications data”, such as who a person talked to online and when, and not the message content. Read more…

There is an old saying, “The best-laid plans often go awry.” There is an even more appropriate saying, “The road to Hell is paved with good intentions.” I can see this starting out as monitoring only “communications data,” but I don’t see it ending there. Pretty soon someone in the hierarchy will decide that the information would be a bit more helpful if the “communications data” had a subject attached to it. Then they wouldn’t have to worry about uselessly following Mohammed Abadar, who talks only about golf, when Mohammed Elbadden always talks about vigorously exothermic reactions. There is, of course, a possibility that Professor Elbadden works at the Energetic Materials Research and Testing Center or perhaps Alford Technologies, but without knowing the context of the subject within the communication it would be difficult to know whether Professor Elbadden needs to be flagged. As storage space is pretty cheap now and will most likely only get cheaper in the future, the storage overhead will be negligible. There is also the chance that Barrister John Carter, who communicates from a specific location to the same address at the same time every night, could be planning something…or hiding from his wife the fact that he has a mistress. A subject, some context, and perhaps a comparison with previous messages would make classification far more effective…
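To show how thin the line is between the two, here is a hedged sketch of the difference between the “communications data” the government says it wants and the content it says it will ignore; the record types and the sample message are my own illustration, not anything from an actual DPI system.

```java
import java.time.Instant;

// Illustrative only: the "communications data" envelope versus the content
// that the same interception point necessarily has in front of it.
public class InterceptSketch {

    // What "communications data" amounts to: who, to whom, over what service, when.
    record CommunicationsData(String sender, String recipient,
                              String service, Instant when) {}

    // What deep packet inspection could also read if the policy ever widened.
    record FullMessage(CommunicationsData envelope, String subject, String body) {}

    public static void main(String[] args) {
        FullMessage msg = new FullMessage(
                new CommunicationsData("abadar@example.org", "friend@example.org",
                                       "email", Instant.now()),
                "Golf on Saturday?",
                "Tee off at nine if the weather holds.");

        // Today's proposal stores only the envelope...
        System.out.println("Logged: " + msg.envelope());
        // ...but adding the subject, then the body, is a policy change, not a technical one.
        System.out.println("Also visible: " + msg.subject() + " / " + msg.body());
    }
}
```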

A little information can go a long way, but more information can go even further…and what government willingly throws away information?