Five Features Operating Systems Should Have
Getting things to the next level...
Monday, February 28, 2005 by Frogboy | Discussion: OS Wars
My friend Eugenia over at OSNews.com was lamenting how boring operating systems have become. And I agree. How far we've fallen from the exciting times of 1991, when pre-emptive multitasking, protected applications, flat memory, and object-oriented interfaces were about to be delivered to the masses.
Since then, improvements to operating systems have been incremental. Or in some cases, we've actually regressed (largely thanks to jerks taking advantage of open systems to create viruses and spyware). IBM's OS/2 was well on its way to providing an OS in which users from around the world could seamlessly integrate new functionality into the operating system via SOM and OpenDoc. Of course, had that occurred, it would have been the mother of all opportunities for spyware vendors and the creeps who make viruses. The 90s could be looked back upon as a time of naiveté and idealism. It was in that environment that ActiveX, VBScript, Internet Explorer, and Outlook Express were designed, technologies we now rue because of the exploitative nature of malicious people.
And so in the past few years the two major OS vendors, Microsoft and Apple, have largely taken on the role of tossing features into the OS that third parties had already provided or that the other had managed to come up with on its own. The Linux vendors then try to mimic that (there, I've offended all three camps!).
With MacOS X, Apple finally managed to put together a stable operating system with preemptive multitasking and memory protection. The first release was slow and buggy, but subsequent versions got better and better. MacOS X Tiger looks to be a refinement of what has come before, along with Apple's usual innovative twists on existing concepts (ex: Dashboard). Apple's "innovation" with MacOS X has been very, very gradual, a far cry from the heady days of "Pink", "Taligent", and "OpenDoc". This is particularly true when one considers that its ancestor, NeXTStep, was released in the late 80s.
Meanwhile, Microsoft has contented itself with lifting shareware programs and throwing them into the OS. WinZip sure looks popular; let's put ZIP into the OS. Hey, AOL is annoying us; let's tweak them by making our media player skinnable and tossing that in. Hey, let's put in a basic movie editing program too. Ooh, WindowBlinds sure looks nice; let's make our OS have its own skins too. Meanwhile, in areas where there's little third-party innovation (or at least competition), things haven't progressed very much. Outlook Express hasn't changed much since 1998. Internet Explorer is still roughly the same today as it was in 1998. Now before any Windows zealots get on me, I'm not saying that users haven't benefited some from Microsoft tossing in home-grown (or contracted-out) copies of third-party programs. I like being able to work with ZIP files "natively". But bundling more content with the OS is not the same as innovation. To be fair, Microsoft's Longhorn project is very ambitious, and Avalon promises to, at the very least, pave the way to really controlling how large things appear on our monitors without giving up resolution.
Yeah, yeah, talk is cheap. So what should the OS vendors be doing? I can think of five things that operating system developers should look at making part of the OS.
#1 Seamless Distributed Computing
In an age of gigabit internal networks, it is striking that I can't simply tell my operating system that I have "access" to Machines A, B, and C on my network and to use their memory, their CPUs, and other resources to ensure that my local computer remains always responsive. Instead, Windows users are quietly bumping up against the "User Handle" limit of Windows XP (which most people don't even know about; they just discover that their programs start to crash). So if I felt my machine was getting slow, I'd just add more machines on my LAN to use, or toss another machine under my desk, or even (gasp) throw some tasks at my home machine via the Internet (where the OS would be smart enough about which tasks it farmed out based on the connection speed). This sort of thing should be built into the OS, today.
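To pin the idea down, here's a toy sketch in Python of the kind of decision such an OS would make silently on your behalf. All the names here (`Node`, `pick_node`) are hypothetical; no real OS exposes this API. The point is just that migrating a process is only worth it when a remote machine's spare CPU outweighs the cost of getting the work there:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_load: float    # 0.0 (idle) to 1.0 (saturated)
    latency_ms: float  # round-trip latency to this node

def pick_node(nodes, task_cpu_seconds, local_load):
    """Return the remote node a CPU-bound task should migrate to,
    or None if running locally is still the fastest option."""
    # Rough estimate of the cost of running locally, inflated by local load.
    best_cost = task_cpu_seconds * (1.0 + local_load)
    best = None
    for n in nodes:
        # Remote cost = compute time inflated by that node's load,
        # plus the network latency of shipping the task over.
        remote_cost = task_cpu_seconds * (1.0 + n.cpu_load) + n.latency_ms / 1000.0
        if remote_cost < best_cost:
            best, best_cost = n, remote_cost
    return best

# A mostly idle Machine A and a saturated Machine B on a gigabit LAN:
lan = [Node("A", cpu_load=0.1, latency_ms=0.3),
       Node("B", cpu_load=0.9, latency_ms=0.2)]
choice = pick_node(lan, task_cpu_seconds=5.0, local_load=0.8)
print(choice.name)  # the busy local box offloads to idle Machine A
```

A real implementation would also weigh memory pressure, data locality, and user rights on each machine, but the shape of the decision is this simple.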
#2 Seamless Distributed File System Databases
To be fair to Microsoft, WinFS has the potential to become this, but it keeps being delayed. I should be able to create "cabinets" on my desktop (or anywhere else) in which I set a few parameters, and from then on, files show up there as if it were any other folder. I should be able to set the scope of my cabinets: local, network, worldwide. The physical location of files should be irrelevant. I should be able to organize things however I want without it affecting anything. Instead, move something from c:\program files\ and you're asking for trouble. Why? This should be part of the operating system, today.
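A minimal sketch of the "cabinet" idea in Python (the `Cabinet` class is made up for illustration): a folder whose membership is a query over one or more scopes, rather than a physical directory. Here the scopes are local directories; in the full vision they would be network shares and worldwide mounts.

```python
import fnmatch
import os
import tempfile

class Cabinet:
    """A virtual folder: membership is defined by a query over scopes,
    not by where a file physically lives (hypothetical sketch)."""
    def __init__(self, pattern, scopes):
        self.pattern = pattern  # e.g. "*.doc"
        self.scopes = scopes    # local dirs here; shares or remote mounts in principle

    def files(self):
        """Yield every file in any scope that matches the cabinet's query."""
        for root_dir in self.scopes:
            for dirpath, _dirs, names in os.walk(root_dir):
                for name in names:
                    if fnmatch.fnmatch(name, self.pattern):
                        yield os.path.join(dirpath, name)

# Demo: two files exist, but only the .doc shows up in the cabinet.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "report.doc"), "w").close()
open(os.path.join(demo, "notes.txt"), "w").close()
matches = list(Cabinet("*.doc", scopes=[demo]).files())
print(matches)
```

Moving `report.doc` to any other scoped location wouldn't break anything: the cabinet would simply find it there, which is exactly the property c:\program files\ lacks.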
#3 Global User Management
When I activate Windows (or MacOS) I should be given a unique global account where my preferences and other key settings are stored, along with any data I would choose to store there for an additional fee. There should be a Bwardell.Microsoft.net (or Bwardell.Mac.com). I shouldn't be forced to use it, so that the privacy nuts are kept happy. But if I choose to use it, all my preferences, email, favorites, and other system-specific "stuff" could be kept there, synchronized, and accessible from any machine I'm on. Moreover, it would also act as a redirector between machines. If Machine A has my spreadsheets and I'm on Machine B, then I would be able to get to those files through my Microsoft.net account redirecting my files from Machine A to Machine B. The system would need to be plugin-able so that Microsoft could avoid any DOJ issues. So a Yahoo or Google or even Apple could provide alternative network coordinators (i.e. use Bwardell.Google.net instead of Microsoft.net). (DWL: I'm not talking about a domain controller here; we're talking about a global system that's seamlessly part of the OS.)
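The redirector part is the interesting bit, so here's a toy model of it in Python. Everything here is invented for illustration (`GlobalAccount`, the `//machine/user/file` path style); the point is just that the account keeps an index of which machine holds which file, so any machine you sit at can resolve a filename without you knowing where it lives:

```python
class GlobalAccount:
    """Hypothetical global account: knows which machine stores each
    file and redirects requests so physical location is invisible."""
    def __init__(self, user):
        self.user = user
        self.index = {}  # filename -> machine that stores it
        self.prefs = {}  # synchronized settings would live here too

    def register(self, machine, filenames):
        """A machine reports which of the user's files it holds."""
        for f in filenames:
            self.index[f] = machine

    def locate(self, filename):
        """Resolve a filename to a redirected path, wherever it lives."""
        machine = self.index.get(filename)
        if machine is None:
            raise FileNotFoundError(filename)
        return f"//{machine}/{self.user}/{filename}"

acct = GlobalAccount("bwardell")
acct.register("MachineA", ["budget.xls", "article.doc"])
path = acct.locate("budget.xls")
print(path)  # //MachineA/bwardell/budget.xls
```

Swap the class for a service at Bwardell.Google.net instead of Microsoft.net and nothing about the client side changes, which is the plugin-able property described above.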
#4 Universal environments
Following #3, I should be able to have a universal user environment. If I go to a public machine and type in my UserID and password, it should then be able to pull my environment from Bwardell.Microsoft.net so that my program settings, etc. are there. If the local machine is missing a program, it would simply be grayed out. But if the local machine does have it, then I would be able to launch it from the normal place in the Start menu. All my files would be accessible here through the redirection of Microsoft.net. What machine my files are physically located on should be immaterial, as long as they are safe and secure and backed up (all handled by the OS without me messing with it).
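The graying-out behavior is easy to picture as code. A rough sketch (the function and its inputs are hypothetical): the roaming profile supplies the list of programs the user expects, the local machine supplies what's actually installed, and the merged menu disables anything missing rather than hiding it.

```python
def build_menu(profile_programs, local_programs):
    """Merge a roaming profile's program list with what is actually
    installed on this machine; missing entries are grayed out."""
    installed = set(local_programs)
    return [{"name": prog, "enabled": prog in installed}
            for prog in profile_programs]

# Profile expects three programs; this public machine only has two.
menu = build_menu(["Word", "Photoshop", "Notepad"],
                  ["Word", "Notepad"])
for item in menu:
    marker = "" if item["enabled"] else " (grayed out)"
    print(item["name"] + marker)
```

The same pattern covers files: the menu of documents is always complete, and the redirector from #3 fetches the bits on demand.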
#5 Component Based operating systems
Part of the problem with Windows and MacOS is that the OS vendors make it hard for third parties to enhance the OS with their own code. Rather than worrying about locking everyone out for security reasons, they should come up with a secure way for legitimate software developers to enhance or replace components of the OS. For instance, in Windows I get five choices on how to view a folder (icon, tile, thumbnails, details, and list). Third parties should be able to extend this (I can think of a dozen ways I might want to display data in a folder). There should be APIs so that developers can replace major or minor components of the OS: the web browser engine, parts of the shell, the display rendering engine, the built-in search, and dozens of other things. The OS should be broken down into parts; the OS vendor bundles its own implementations, and third parties can try to innovate in those areas. In that way, everyone wins, including the OS vendor (particularly closed-source vendors). Windows has "API hooking", but what we need is something more robust and secure, well documented and accessible to developers.
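Here's the folder-view example as a tiny component registry in Python. This is a sketch of the pattern, not any real shell API: the OS ships its built-in views, and a third party registers another one through the same documented interface instead of resorting to undocumented hooking.

```python
class ViewRegistry:
    """Hypothetical component registry: the OS registers its default
    folder views; vetted third parties can register more."""
    def __init__(self):
        self._views = {}

    def register(self, name, render_func):
        self._views[name] = render_func

    def render(self, name, files):
        return self._views[name](files)

registry = ViewRegistry()

# The OS vendor's built-in view:
registry.register("list", lambda files: "\n".join(files))

# A third party plugs in its own view without patching the shell:
registry.register("numbered", lambda files: "\n".join(
    f"{i}. {f}" for i, f in enumerate(files, 1)))

out = registry.render("numbered", ["a.txt", "b.txt"])
print(out)
```

In a real OS the `register` call would require a signed, reviewed component, which is where the "secure way for legitimate developers" comes in.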
The closed-source vendors should be looking at Linux. It has managed to keep up on a tiny fraction of the resources. It does this through open source. Anyone can update any part of the OS, and if it's a good thing, it can be included as part of a distribution. If the major closed-source OS vendors started providing documented ways for third parties to easily extend virtually any part of the OS without a lot of pain, they could harness their considerable third-party developer base. OS/2 was working in this direction with SOM, where OS features could be "inherited" by app developers, extended, and then updated. Object Desktop on OS/2 was coded entirely by a single individual because he could inherit the base features of the OS and start from there. And then other developers could go from there. (DWL: I know COM can do some of this, but it's not what I am talking about; if you've done any serious OS extension development, COM doesn't apply here much.)
...
Now you might be thinking, "Well, if you think these ideas are so great, why don't you do them?" The answer is, only the OS vendor can, as a practical matter, do this. If a third party makes these things and it's successful, it's only a matter of time (probably one version) before the OS vendor puts in one of these on its own, wiping out the "low-hanging fruit" part of the market. As soon as some third party, for instance, put out a really good distributed computing product that "did it right" and started doing good business targeting consumers (DWL: Armtech is not a consumer product and isn't what I'm talking about), you could be assured that the next version of the OS would have some basic implementation of this put in. And the OS vendor's fans would chime in, "That should be part of the OS anyway!" In short, there's no business case for a third party to invest the money to develop these things, because the pay-off isn't there.
But if these features were part of the OS, you could imagine how it might lead to dramatic changes in the way we use and think about computers. And to add to that, imagine the kinds of additional innovations that would present themselves if these things were already taken as a given?
## About the Author ##
Brad Wardell (aka "Frogboy") is the Designer of Stardock's Object Desktop. Object Desktop is a suite of operating system extensions for Microsoft Windows. Stardock also makes programs such as Multiplicity, Galactic Civilizations, and much more. Its home page is www.stardock.com.
"DWL" stands for "Don't write letters".
Reply #2 Monday, February 28, 2005 4:34 PM

Reply #3 Monday, February 28, 2005 4:38 PM
...I can't simply tell my operating system that I have "access" to Machine A, B, and C on my network and to use their memory, their CPU, and other resources...
I suggest looking at the OpenMOSIX project: http://openmosix.sourceforge.net/
From their website:
"Once you have installed openMosix, the nodes in the cluster start talking to one another and the cluster adapts itself to the workload. Processes originating from any one node, if that node is too busy compared to others, can migrate to any other node. openMosix continuously attempts to optimize the resource allocation."
"With openMosix' Auto Discovery, a new node can be added while the cluster is running and the cluster will automatically begin to use the new resources."
"There is no need to program applications specifically for openMosix."
#2 Seamless Distributed File System Databases
To be fair to Microsoft, WinFS has the potential to become this but it keeps being delayed. I should be able to create "cabinets" on my desktop (or anywhere else) in which I set a few parameters and from then on, files show up there as if they were any other folder.
AFS: http://www.openafs.org/
"It offers a client-server architecture for file sharing, providing location independence, scalability and transparent migration capabilities for data."
Coda: http://www.coda.cs.cmu.edu/
"Currently, Coda has several features not found elsewhere.
1. disconnected operation for mobile computing
2. is freely available under a liberal license
3. high performance through client side persistent caching
4. server replication
5. security model for authentication, encryption and access control
6. continued operation during partial network failures in server network
7. network bandwith adaptation
8. good scalability
9. well defined semantics of sharing, even in the presence of network failures"
#3 Global User Management
Wouldn't this be redundant once you started using a distributed filesystem? Nonetheless, this is one where nothing springs immediately to mind.
#4 Universal environments
If I go to a public machine, type in my UserID and password, it should then be able to pull my environment from Bwardell.Microsoft.net so that my program settings, etc. are there.
Who was it that said "The Network is the Computer"? That seems a better way of approaching this. Do you really want your personal information being transferred onto an unknown, untrusted machine just so you can send an email or check a document? There are many thin-client options available which make this type of service quite possible. Indeed, I used to have a remote desktop accessible globally using a Java-enabled web browser. Sadly, the company stopped offering the service after a while.
#5 Component Based operating systems
Part of the problem with Windows and MacOS is that the OS vendors make it hard for third parties to enhance the OS with their own code.
You're right. They do. But many don't. Many go out of their way to ensure that standards are open, code is available and developers are welcome.
But if these features were part of the OS, you could imagine how it might lead to dramatic changes in the way we use and think about computers. And to add to that, imagine the kinds of additional innovations that would present themselves if these things were already taken as a given?
They are part of the OS. And that OS is GNU/Linux. Why lament the lack of functionality of large, closed, unfriendly software products when you can simply stop using them and get what you need?
Reply #4 Monday, February 28, 2005 4:59 PM
Oy, there's always a Linux advocate ready to jump in and say "Here's a cryptic, hard-to-set-up, non-seamless solution that will, with enough sweat and pain, kind of almost do part of what you want..."
Your comment almost inspires me to write "Why Linux advocates don't get it". When I say features should be part of the OS, I mean part of the OS: features that an end user can make use of.
Do you see an end user configuring THIS? Or downloading and installing software from sites that look like they were made in 1994?
The things I mention aren't particularly earth shattering in terms of original thought. But implementing them as part of the OS in a way designed to be used by end users is almost the entire battle.
When Red Hat Linux starts bundling this stuff as part of the client and makes sure that it has a friendly, easy-to-use, ROBUST user interface to tweak (not configure, tweak; configuration should be automatic), then we can evaluate it.
For something to be useful, it has to be implemented. We released a product today called Multiplicity (www.multplicity.net). I have no doubt that some Linux advocate is going to come on and say "But Synergy does this for free," but it really doesn't, because it's all about implementation.
For instance, distributed computing: I would expect that the OS would automatically be able to look at my LAN and see which machines I have rights on. It would then have a simple, easy-to-use preference dialog where I could tune it, if I wanted to tune it. The system would automatically "know" when to put a process (not a thread) onto an external machine, looking at latency and throughput prior to doing so.
There are tons of CPU-bound tasks that could be farmed out where a latency of 0.2 to 0.3 ms (typical in a gigabit network) wouldn't be an issue. Some rendering packages already do this sort of thing, but it's app-specific.
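The back-of-envelope math here is worth making explicit. A sketch (hypothetical function, round numbers): offloading wins whenever the remote run time plus the cost of shipping the task's data over the wire beats the local run time. At gigabit speeds and sub-millisecond latency, the transfer time dominates, and even that is small for CPU-heavy jobs like rendering:

```python
def offload_saves_time(local_seconds, remote_seconds, payload_mb,
                       latency_ms=0.3, bandwidth_mbps=1000):
    """True if running the task remotely, including the time to move
    its data over the network, finishes sooner than running locally."""
    transfer_s = (payload_mb * 8) / bandwidth_mbps  # MB -> megabits
    return remote_seconds + transfer_s + latency_ms / 1000.0 < local_seconds

# A 10-second render with 50 MB of scene data over gigabit:
# transfer is about 0.4 s and latency is negligible, so farming it
# out to a box that can do it in 5 s is a clear win.
print(offload_saves_time(10.0, 5.0, 50))
```

The same check over a slow Internet link (say 1 Mbps) flips the answer for most tasks, which is why the OS would need to weigh connection speed before farming anything out, exactly as the article says.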
Reply #5 Monday, February 28, 2005 5:21 PM
Reply #6 Monday, February 28, 2005 5:38 PM
But the general direction seems to be this: a centralized server. It would be run by the ultimate server OS, which would distribute tasks among various nodes according to geography. "Buying a computer" would become obsolete: instead, you would simply buy a monitor and plug it in or wirelessly connect. Then, you would purchase the amount of resources you wanted available. A terabyte of storage could be added to your system in mere seconds. RAM upgrades would be a credit card number away. The simple act of logging on would transform that screen into your "home" computer.
3rd party development would be limited to owned machines until the software is reviewed and approved for the network, at which time it could be sold. I think centralization will continue to be a trend of computing until it's simply impossible to centralize further. But I doubt that will all happen in my lifetime.
Dan
Reply #7 Monday, February 28, 2005 6:04 PM
1)
That one is "sort of" being done with the Cell Chip
2)
That sort of is "WinFS", but I understand what you're talking about
3-4)
I think Xerox wanted to do something like that back in the '80s, right? Not sure... I also think Microsoft was headed that way with the .NET architecture (I am not sure, but I think it sort of can be done with the current .NET, but not to the degree you're talking about)
5)
THIS IS THE BIG ONE!!!
Never heard this one before or any type of variation. An interchangeable OS sounds like a good plan to me.
With all 5 combined you form (Voltron, defender of the...) a great system. It sort of scares me though... a virus would be catastrophic. Plus, they could follow us wherever we go...
*puts foil over head*
Reply #8 Monday, February 28, 2005 6:14 PM
I'm less of a Linux advocate than you might think. I am a pragmatic soul who will use the best tools for the job. I am somewhat technically inclined so GNU/Linux happens to be the tool I generally use. It used to be OS/2. It has occasionally been Windows but less and less as I now find Windows often being less user-friendly than the alternatives.
I mean part of the OS, features that an end user can make use of.
Do you see an end user configuring THIS? Or downloading and installing software from sites that look like they were made in 1994?
Maybe not. How about one of these?
Anyway, my point was not to proselytise any particular OS but to demonstrate that many, if not all, of these features already exist. Or are very close to existing. Some have come and gone. They haven't all been wrapped up in a nice end-user package, but there's no reason they couldn't be, and quickly. Unless you felt that it had to be a big, Redmond- or Cupertino-based OS. Then you'd be waiting.
Reply #9 Monday, February 28, 2005 6:28 PM
Linux has this, Mac has that, Windows does this... if you actually can implement these things, you have to install them and know what you're doing when you do.
Once installed, it still doesn't do what he is saying. PLUS, it is on different OSes. So now, after you install a few proggies on WinXP, you have to merge Mac with it to get some of the other features, then put Linux on top.
Some of the stuff exists in some small form (except 5... I don't see that anywhere... someone prove me wrong).
Reply #10 Monday, February 28, 2005 6:31 PM
I would like to combine a couple of my PCs' power and experiment with it, maybe using my network. USB?
Any ideas?
Reply #11 Monday, February 28, 2005 7:52 PM
This is the UI of OpenMosix, and no, this isn't the type of "seamless" end-user stuff I'm getting at.
Reply #12 Monday, February 28, 2005 8:11 PM
Reply #13 Monday, February 28, 2005 9:10 PM
I think you're missing a point though. The UI you want to see is this:
Yep. Nothing. Not a thing. It's just there and it Just Works. You don't have to see what the loading is over your cluster. You can if you want. Ideally you just run a program and let the cluster work out where it's going to run. And that is the default behaviour.
I also think you're underestimating the user. Eye-candy is eye-candy even if you don't understand it.

Reply #14 Monday, February 28, 2005 10:44 PM
Reply #15 Monday, February 28, 2005 10:45 PM
I think the point of making an operating system is to present the world with your way of managing life. If they let everyone do this, the only thing the developer is doing is making the kernel and basic plug-ins for it. But I see where you're coming from. And I agree that Linux doesn't quite cut it. I mean, there is a lot of stuff out there for it, and a lot as in A LOT. But come on, after you spend a good day or two getting just the operating system core and perhaps the GUI front end adjusted to what you want, you're then faced with the daunting task of installing software. Given that money rules our world, having a unified way to run your computer is what everyone needs. The point of things NOT being customizable is to keep the interface and operating system aligned. I mean, I honestly am just fine with 4 choices on where my Dock goes, not the top left, top middle left, top middle, etc. etc. choices that we see in certain operating systems. Linux could be great, but there needs to be unification.
And that is what these features are about. I like the idea of being able to go onto one computer and get the files from another; it's unified. Oh, and especially the idea of unified user accounts. I am really tired of having four or five computer accounts for all the computers I have, on top of all the sites that need passwords and user names nowadays.
The sharing of resources is also a great idea, and the current solutions may be great, but once again they are impossible to set up (ok, maybe not for the Linux-aholics, but for me, even though I might know a lot about computers and their operating systems, that stuff is a nightmare). Actually, if things follow the rumors, Tiger will supposedly have the Xgrid feature built in... Works for me...
Reply #16 Monday, February 28, 2005 11:04 PM
And I agree that Linux doesn't quite cut it. I mean, there is a lot of stuff out there for it, and a lot as in A LOT. But come on, after you spend a good day or two getting just the operating system core and perhaps the GUI front end adjusted to what you want, you're then faced with the daunting task of installing software.
Have you actually used a desktop GNU/Linux in the last couple of years? You put in a CD, you boot, you answer a couple of basic questions (like "Are you sure you want me to play nicely with that Windows partition I found or just nuke it and move on?") and you're done. Install software? How about you point at it in a list and click a mouse button. Is that really too hard?
Let's say you want to download pictures off a digital camera. Plug it in.
I went and bought a camera that generally doesn't play nicely. I didn't realise it at the time I bought it, but it needs proprietary TWAIN drivers installed under Windows. It doesn't appear as removable media (Canon Ixus V2 if you're interested). It's a pain in the butt. Thankfully I don't use Windows very often - I use Ubuntu ( http://www.ubuntulinux.org/ ). I plug the camera into the USB port and my desktop politely asks if I want to import the photos. End Of Story. No extra drivers, no stuffing around. It. Just. Works.
And It's Just Working better than Windows Just Works for a lot of people these days.
Granted it doesn't do games. Not well anyway. Personally, this isn't an issue but I realise it will put some people off. That's fine. Tools for jobs like I said earlier.
But honestly! A good day or two? I'd be surprised if it took more than an hour on most people's systems. And let's face reality - most end users don't install their OS anyway. Their PC builder does.
Reply #17 Tuesday, March 1, 2005 12:38 AM
The flash presentation explains it a lot better than I can, but the basic idea is that there's a central server with iFolder software on it, and every computer you use has the client software on it. You get a folder on your computer called "iFolder" in which you can put subdirectories if you want. Anything you put in one of the "iFolder" folders on any machine is automatically put onto the central server and then downloaded to the iFolder directory on your other machines, provided you're logged in to the client software (otherwise it's downloaded when you log in). You can also access your iFolder things via a web browser, where you can upload, delete, and move content, and all changes you make there will be automatically made in the local iFolder directories on your machines. This way, no matter which of your machines you are using wherever you are, your files are always with you. I don't think this is exactly what you were talking about, but it's close I think.
Like I said, watch the flash presentation and it will explain it a lot better. You can even try a free demo using 10MB of space on a Novell server somewhere so you can see how it works. I tried it, and it's impressive. It does exactly what it says it does, and is as fast as your Internet connection allows it to go.
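The push-to-server, pull-to-clients flow described above can be sketched in a few lines of Python. This is a toy model for illustration only, not Novell's actual protocol: clients push versioned changes to a central store, and any client can ask for everything newer than what it already has.

```python
class SyncServer:
    """Toy model of iFolder-style sync: a central store keeps the
    newest version of each file; clients push and pull deltas."""
    def __init__(self):
        self.store = {}  # path -> (version, content)

    def push(self, path, version, content):
        """Accept a client's change only if it's newer than what we have."""
        current_version = self.store.get(path, (0, None))[0]
        if version > current_version:
            self.store[path] = (version, content)

    def pull(self, local_versions):
        """Return the updates a client is missing; the client reports
        what it has as a path -> version mapping."""
        return {path: vc for path, vc in self.store.items()
                if vc[0] > local_versions.get(path, 0)}

server = SyncServer()
server.push("notes.txt", 1, "draft")
server.push("notes.txt", 2, "final")   # the newer version wins
updates = server.pull({"notes.txt": 1})  # a stale client catches up
print(updates)
```

Real sync systems also have to handle conflicting edits made offline on two machines at once; simple last-version-wins, as here, silently drops one side, which is why products like iFolder are harder to build than this sketch suggests.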
Reply #18 Tuesday, March 1, 2005 3:45 AM
Reply #19 Tuesday, March 1, 2005 9:38 AM
Linux is great for customizability, but there is no such thing as unification in that world. Different interfaces for all my applications, different libraries, and stuff like that. Needless to say, Mac OS and Windows aren't perfect either. There are still interface incongruities in Windows, and Mac OS runs on one type of hardware only. But take the universal environment idea: it wouldn't go over too well if you tried to use a kiosk somewhere that has a Debian flavor of Linux and uses KDE when your computer at home runs a Fedora flavor of Linux with GNOME. Despite the similar kernel, the files that would be needed to give it the same look'n'feel as your home computer would be missing. That's all I'm saying.
Reply #20 Tuesday, March 1, 2005 11:09 PM
I don't mean to offend Linux users, but I think some of them are more interested in winning an advocacy debate than in actually providing a solution.
That is, in my experience many Linux users are very quick to point to some poorly implemented half-solution that they don't use just so that they can say things like "See, you can already do this with Linux".
It's not just Linux users; OS/2 advocates (myself included) were the same way. Technology demos or things that don't do the job seamlessly are not real-world solutions. For instance, a BeOS advocate can't just say "Well, BeOS had this advanced file system that could do most of what you wanted..." Well, true, it did some of this. But not most of what I'm talking about.
Let me give you an example of what I'm talking about:
A distributed file system could keep copies of my documents on many machines. Disk space is cheap. Let me access my stuff from anywhere and don't make me sweat too much about the physical location of the files. As long as they're secure (encrypted) what do I care? I shouldn't have to manually back up files in this day and age. I shouldn't have to go hunting through directories or LAN drives looking for a file.
Six months from now, if I want to update the MS Word version of this article, I shouldn't have to run around to my various machines wondering which machine has it and which drive/directory I put it on. I should be able to log on to a machine, any machine; it would go to a global user account manager, and I would be able to open up a "documents" folder that I made, and all my documents should be there. A filtering system should be part of the folder view (this component possibly created by a third party and plugged in) where I could type a couple of keywords and my article would come up. The article might be located on the other side of the world. Who cares? I then open it up, edit it, and save it, and it's saved back to a machine I have privileged access to.
When some user says "Just use NFS" or whatever I just shake my head. Linux isn't mainstream precisely because so many of its advocates (and developers) never really finish their software. There are notable exceptions but by and large, Linux developers just put enough in there so that they can say "Aha, I did it!"
It's ironic that this week's release of Multiplicity highlighted this difference between implementation and tech demo, because we get open source advocates saying "You can do the 'same' thing with Synergy." Well no, you actually can't, and even if it did, its implementation is a perfect example of what I'm getting at. A feature that users can't really use might as well not exist.
So it doesn't matter if there's some cryptic, hard-to-set-up program that can kind of do distributed computing that you can download and spend hours setting up. It's not an OS feature. A distributed computing feature in the OS would basically be silent. It just works. Same for all these feature suggestions: they have to be a) included with the OS and b) seamless.
Reply #1 Monday, February 28, 2005 3:33 PM
The .Mac service that you can subscribe to with Mac OS X provides some of this functionality. While document redirection isn't an option here, user preferences, bookmarks, mail, the address book: all of these can easily be synced between computers. Apple has released an SDK for people to build their own applications which can sync data in a similar fashion.
Now I wouldn't say this is what you are asking for, but it's a step in the right direction. I would love to see Apple open it up so that I could select any server with the right services to use as my sync host. Then I could really build the functionality I want out of the system.
On Filesystems -
Again, I guess I'll plug Mac OS X. It allows you to access applications via their name, with no need for their location. There are still some limitations: if a user renames the application, then you cannot access it in that manner. So use the bundle identifier (like a package id in Java: com.apple.addressbook, for example). I agree I should be able to access ANY file in a similar fashion. We'll see if the new Spotlight technology can do that. Currently, tools that do indexing are pretty rough and can be pretty slow.
Many of these things I would like to see as well, but a lot of it has to do with security. If you digitally sign something, how do you know someone didn't fake it, or where do you go to get the signature? Why should I have to pay someone to sign my application? Yet we still need some way of regulating that...