A blog titled Shipping Seven has gotten a lot of traffic recently for its article about Windows 7 and the MinWin kernel – namely, how they’re actually one and the same. The argument offered by “Soma” is that Windows Vista’s kernel (which is what Windows 7 will be built on) is MinWin, and that it’s already on every Vista desktop out there.
Whether MinWin is the very same kernel that went into Vista is officially unknown at the moment; but what we do know is that Shipping Seven is either one huge fake, or else the Windows core programmers at Microsoft are so stupid that they don’t know the first thing about coding, kernels, operating systems, and compilers.
The post at Shipping Seven is littered from beginning to end with fallacies, lies, and incorrect deductions that anyone with even the most basic coding skills would know better than to ever post, especially not when attempting to pass it off as the work of some of the more talented coders out there.
Here are some of the more glaring factual errors in the post that completely strip Shipping Seven of any authenticity or authority it may have on the topic of Windows 7:
How many times has the Ubuntu or Mac OS X kernel been rewritten?
Correction: OS X is powered by a rewrite of the XNU kernel, which is a modified version of the Mach kernel which, in turn, is a complete rewrite of the original BSD kernel. And, of course, Ubuntu isn’t an OS in and of itself; rather, it’s a distribution of Linux.
While it can be argued that not every developer at Microsoft is expected to have intimate knowledge of the inner workings of other operating systems, no one in their right mind would believe that the Windows kernel programmers don’t even know what kernels their strongest competitors are currently using.
We spent a boatload of time during Windows Vista making everything ‘componentizable’ – So that we could (by creating some xml files that our build process uses) create a boatload of different versions of Vista (and Server 2008).
….
You already have MinWin – It is the core system components that Windows Vista needs to function; everything else on the system depends directly or indirectly on it. It is the last thing you could (theoretically) uninstall.
So, if you really really want it, you can get it, I suppose – you probably could (using the command line) uninstall almost every single Windows Vista system component, including the user interface. I don’t know what the hell you’d do with just a kernel and a kernel loader on your machine, though.
Assuming you can get past the way the post was written (with references like “using the command line” that suggest a general lack of computer knowledge, treating the command line as if it were a “god mode” that can be used to do just about anything), there’s still the matter of factual inaccuracies – and inconsistencies within the article itself.
You can’t change/modify/revert pre-build settings by running commands in the command line. Components that are integrated at compile time simply cannot be removed by running a bunch of commands afterwards – especially not from within the resulting OS itself.
Anyone who’s ever manually compiled a Linux kernel knows this. You can’t strip ext3 support from the kernel after it’s already built any more than you can add Reiser4 support to the kernel without re-building it. As a matter of fact, anyone who’s built anything at all should know this – the same rules apply to any other program as well. For example, you can’t remove PHP support from Apache if you’ve compiled mod_php directly into the binaries.
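The distinction lives in the kernel’s build configuration. Here’s a hypothetical `.config` excerpt (the selection is invented for illustration); note that this only applies to features compiled directly into the image – options set to `m` build loadable modules instead, which can be added and removed after the fact, as a commenter notes further down:

```
# Illustrative excerpt from a Linux kernel .config (not a complete config)
CONFIG_EXT3_FS=y        # compiled into the kernel image; removable only by rebuilding
# CONFIG_REISER4_FS is not set    <- adding it means reconfiguring and rebuilding
# A "=m" setting would build a loadable module instead, which *can* be
# loaded and unloaded after the build without touching the kernel image.
```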
Shipping Seven is a big, fat fraud. It’s written by someone with only the most basic knowledge of computers, zero knowledge of coding concepts, and absolutely no experience with kernels and operating systems. Shipping Seven is most likely written by the equivalent of a script kiddie, eagerly awaiting the first leaked builds of Windows 7 to appease an inner itch – all the while lamenting his lack of involvement in the Longhorn beta. It isn’t worth the time it takes to read, and definitely doesn’t deserve even the questionable authority it now has on the topic.
“How many times has the Ubuntu or Mac OS X kernel been rewritten?”
Good question. Rather than answer it, you went off on some stupid rant that misses the point.
Ubuntu has a kernel. OS X has a kernel. They aren’t the same kernel. Both have been rewritten numerous times.
Bruce, we did point out the numerous complete rewrites of the OS X kernel. We had also included the (rather lengthy) history of the Linux kernel; but removed it for brevity’s sake – in our opinion, the Mac/XNU/Mach/BSD/Unix kernel argument should suffice.
I haven’t checked out Shipping Seven, so I will assume you are correct in your assessment of it being a fake.
However, here are a couple of notes of fact about the NT kernel that you might be overlooking in your attempt to refute their claims.
NT is NOT like *nix kernels, or even other Mach variants like OS X. There is a large element of simplicity to its complexity when it is broken down.
The basic NT kernel and design is also very modular, so comparing it to the Linux kernel is going to get you into trouble.
NT is a hybrid kernel that took a fairly elegant approach to kernel design, especially for the early 90s; it was a new conceptual kernel for its time, taking some existing kernel architectural models and some of the best kernel theories that had not been implemented before.
For example, NT has some Mach conceptual ideas and a lightweight kernel API set for performance, but then hands this off to increasing levels of API complexity. This gave NT the ability to be very lightweight at the core, but have extended functionality that didn’t weigh the core kernel down.
NT’s HAL, for example, was under 64KB, and even on Vista is still 256KB (slightly larger on Vista x64), which for modern hardware is still extremely small. Going from the HAL to the lower kernel and API layers is still very small, especially compared to other OS kernel models in use today. This is how and why the MS embedded OSes (XP/Vista) can and do work so well: NT was simply broken apart as needed for the embedded versions to make them very light (as in used-in-a-router light).
The essential design of the NT kernel is both object-based and a client/server model. This is not normal, nor something that you will find in any other consumer-level OS out there.
What the client/server model of the NT kernel allows is a light core kernel with limited APIs, secondary NT APIs, and then subsystems for the actual OS clients that run on top of it. This is why even Win32/Win64 is just a subsystem that sits on NT, and can be ripped off of NT at any time, as MS has demonstrated for YEARS.
The client/server kernel design of NT is also how MS can and does include a BSD or V5 UNIX subsystem for Vista, as the UNIX subsystem is just another OS sitting on the NT kernel client/server model. The UNIX subsystem in Vista is equal to Win32/64, as they are both sitting on the NT kernel in the same way; Win32/64 only has preference because it is the default OS subsystem. With both the Win32/64 subsystem and the UNIX subsystem running together on top of NT, they both get the drivers and benefits of the NT layered kernel, but can also cross-communicate via the NT kernel – and all of this happens with no emulation layers.
For people truly interested in the NT design, source code of the NT kernel can be obtained from Microsoft for academic purposes, just search http://www.microsoft.com and be a teacher of some sort.
This gives academically minded people access to, and a greater understanding of, the NT architecture, from the HAL to the API layers and subsystems that run on it.
Based on the NT design, I would have to correct your assertion that ripping EXT3 out of Linux would be as hard as adding in Reiser. The NT and Linux kernels are VERY different in this regard. Take the recent Linux arguments about the HardLocks code that is giving Linux trouble with multi-processor granularity.
Changing the older hardlock mechanism in Linux requires a substantial amount of work and tricks to bypass it, which is why things are not going so well. With an NT design, something like this is API-wrapped at a low level, and changing the mechanism would be rather simple, not requiring someone to sift through 9,000 lines of code to change the setting.
Another example would be Vista, and Microsoft adding a new video driver model. (Not just new drivers, but a new model of how they are handled from the kernel as well.) This was easier on NT than it would be on a Linux or even OS X kernel design because of the NT design. This is also how Microsoft was able to add in the new WDDM while still allowing the XP drivers (XPDM) to operate on Vista if needed. That alone would be a nightmare situation for both driver and kernel-level feature support if it were tried on Linux or OS X.
The WDDM not only splits the video driver into a shared kernel/user-level model, but it works with the NT kernel of Vista to do things no other consumer OS currently can: features like transparent GPU RAM virtualization, an OS-level scheduler for the GPU, creating a pre-emptive state of GPU multitasking controlled by the OS (WDDM), and even multi-GPU processor support that works with the GPU multitasker. The WDDM in Vista allows several 3D applications on screen at the same time, and gives them all more VRAM than is available on the GPU hardware; and because the OS handles the multitasking, no application can steal the GPU and halt responsiveness. Yet even with demanding games running at the same time, they only lose a few FPS running side by side compared to running exclusively full screen.
So you can see that dropping the WDDM into Vista was a good achievement in that it works as transparently as it does, but this would not have been possible if it weren’t for the NT architecture, which allows for massive kernel-level conceptual changes by adding new kernel-level API constructs that layer on the lightweight lower-level NT kernel APIs.
After taking a bit of this in, you can maybe get a glimpse of the simplicity and the complexity of the NT architecture, and why the MinWin used at the demonstration that started a lot of this is no different than the internal MinWin that Microsoft has used and showcased internally for years. It is nothing more than the NT kernel at the basic level with select API layers used, which is something that can be fairly easily done with NT. And yes, technically the NT kernel can be fairly small when you remove the upper kernel-level API interfaces and especially the subsystems like Win32/64 that sit on top of them.
This is why, when people call for Microsoft to write NT from scratch, many OS engineers/theorists like myself scratch our heads at the level of understanding. Rewrite Win32? Sure, why not. Rewrite NT? A sadly bad idea. NT is more advanced and useful than people give MS credit for, especially if lightweight, extensible, fast, and modular are your design goals for a kernel.
Also, a thing of note: you call BSD a kernel. Technically it is a set of APIs, an ‘interface’ to a kernel, not an actual kernel; on OS X it would be the API wrappers around the modified Mach system calls, not the OS X kernel itself.
“So, if you really really want it, you can get it, I suppose – you probably could (using the command line) uninstall almost every single Windows Vista system component, including the user interface. I don’t know what the hell you’d do with just a kernel and a kernel loader on your machine, though.”
Lolz… yeah, I don’t know either. As a general PC user I don’t care about the kernel; I need a full-featured and secure OS. So what’s the problem if the old one can deliver that?
“as if it were a “god mode” that can be used to do just about anything)”
Windows is not open-source software. You can’t do just anything; at the very least, you can’t develop a new Windows Vista/7 from its kernel.
Thanks for your reply, Anthony – some really interesting information there. I haven’t delved especially deep into the NT kernel’s design; but I’m not surprised to hear that it’s as elegant as you describe. As a hybrid kernel, the NT kernel has undeniable advantages over the other kernels available at the moment; but it’s really such a shame that the Win32 components that most of us live and interact with tend to hide the beauty and speed of the NT kernel.
As for the ext3 reference – the point was in having to re-compile in order to remove support for a given feature, versus being able to modify it from within the resulting post-compile output. I don’t doubt that it’s a rather different and more complicated task than on Windows (from what I’ve seen with the IFS kit; though that does run on top of the kernel rather than as a part of it).
Anthony, thanks for injecting some real information into the conversation. For some reason the NT kernel really doesn’t get the cred it deserves.
Fake or not (my vote’s on legit), the Seven guy speaks a lot of sense, from a position of somebody who seems to know.
His point, which you aptly turned into a semantic bitchfest because he didn’t phrase concepts the way you like and said ‘Ubuntu’ instead of ‘Linux’, is simple. There is no need to rewrite the NT kernel that powers Vista and will power 7, because said kernel is absolutely good enough, and it’s been subjected over the years to the same kind of incremental improvements that the Linux or OSX kernels got anyway.
For somebody so picky about wording, your mention of ‘rewrites’ of the OSX kernel is puzzling. By all accounts, it’s been pretty much the same thing since the beginning, just continuously tweaked and improved upon. No major architectural changes, etc.
Seriously, the MS bashing gets old, fast. Especially from people who don’t seem to have much of a clue.
If you understood the post to be “MS bashing,” then you should probably re-read it.
Apple purchased the XNU kernel to power OS X. The XNU kernel is a hybrid kernel made up of a fusion of the Mach kernel and the BSD wrappings/API. Then they rewrote that, replacing the BSD portions with the 4.3 FreeBSD code.
And the Mach kernel that makes up the majority of what powers OS X is a complete, largely-from-scratch rewrite of BSD’s own kernel.
A kernel rewrite means just that – a rewrite of the code, and does not necessitate any architectural changes.
“You can’t change/modify/revert pre-build settings by running commands in the command line. Components that are integrated at compile time simply cannot be removed by running a bunch of commands afterwards – especially not from within the resulting OS itself”
You’re actually missing the idea. Shipping Seven was referring to the Package Manager and Component-Based Servicing APIs. Run PkgMgr.exe from a command prompt and note that it can uninstall packages. And look at the components in C:\Windows\WinSXS, the manifests in C:\Windows\WinSXS\manifests (the XML files that Seven is referring to, which describes each component and all its dependencies).
Seven is partially right in that the majority of the work to componentize Windows was already done with Vista. The component-based architecture is what allows them to create so many different SKUs, including reduced ones like Server Core.
However, with Vista, the dependencies are quite intertwined, even at the lower levels, meaning that you can never successively remove components down to the kernel like Seven says – there are too many dependencies on upper-level components. Based on the Mark Russinovich video, my impression of MinWin is that it removes some of these upward dependencies, creating a strict line between the “kernel” and the rest of the OS.
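To make the dependency point concrete, here’s a toy sketch (component names and dependencies are invented, nothing like the real Windows data): a component can only be uninstalled once nothing still installed depends on it, so a single upward dependency from a low-level component pins whole upper layers in place.

```python
# Toy model of component-based servicing: a component is removable only when
# nothing still installed depends on it. One "upward" dependency from a
# low-level piece keeps the entire upper layer stuck on the system.
deps = {
    "kernel":    {"crypto-ui"},   # hypothetical upward dependency
    "loader":    {"kernel"},
    "win32":     {"kernel"},
    "crypto-ui": {"win32"},
    "shell":     {"win32"},
}

def strip_down(keep, deps):
    """Repeatedly uninstall any component (outside `keep`) nothing depends on."""
    installed = set(deps)
    changed = True
    while changed:
        changed = False
        for comp in sorted(installed - keep):
            if not any(comp in deps[other] for other in installed):
                installed.remove(comp)
                changed = True
    return installed

# The upward dependency keeps win32 and crypto-ui pinned to the system:
print(sorted(strip_down({"kernel", "loader"}, deps)))
# ['crypto-ui', 'kernel', 'loader', 'win32']

# Cut the upward dependency (roughly what MinWin is described as doing) and
# everything above the kernel can now be stripped away:
deps["kernel"] = set()
print(sorted(strip_down({"kernel", "loader"}, deps)))
# ['kernel', 'loader']
```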
And now that I actually read the Seven blog post in its entirety — it’s pretty much correct: no fake, no fraud, no “glaring factual errors”.
The only thing I would correct in his post is that it’s not quite right to say that “You already have MinWin” – yes, you already have the components, and the components won’t change much between Vista and 7, but there is further separation going on under the “MinWin” term. For example, Mark Russinovich talks about making MinWin able to compile separately from the rest of the OS. That isn’t the case with Vista.
hoopskier, note:
The XML configuration files for the compiler are not those that you see in the Windows\WinSXS\ folder. The pre-build XML files tell the compiler what to build and what not to, whereas those configuration files can be used to remove binaries from Win32 userland.
As a casual reader of “Shipping Seven” and a seasoned computer scientist, I find nothing there that indicates it’s a fake. I’m not sure why you are so intent on discrediting “Shipping Seven” but your choice of language leaves me believing he has more credibility than you… certainly more maturity. I think all you’ve done is create more publicity for “Shipping Seven” which is good stuff because I personally would like to see more frequent posts there.
neosmart.net/blog is the real fat fraud here!
What XML configuration files “for the compiler”? Who ever said anything about the compiler? Only you did: “Components that are integrated at compile time simply cannot be removed by running a bunch of commands afterwards”
Seven’s not talking about the compiler. Read his statement: “that we could (by creating some xml files that our build process uses) create a boatload of different versions of Vista”.
He’s talking about the part of the build process that takes the various components (DLLs) that were built and assembles them into SKUs (Ultimate, Home Basic, Datacenter, etc). The SKU-building process is all driven by XML files like the very ones in the WinSXS\manifests folder. The Windows build process does more than just compile, you know; I imagine that it results in ready-to-burn DVD images for each SKU.
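That SKU-assembly idea can be sketched in a few lines (the XML schema and component names here are invented, not the real manifest format): the build reads per-SKU component lists out of XML and assembles each edition from one pool of already-compiled binaries.

```python
# Rough sketch of manifest-driven SKU assembly. Schema and component names
# are invented for illustration; real WinSXS manifests are far more involved.
import xml.etree.ElementTree as ET

SKUS_XML = """
<skus>
  <sku name="HomeBasic">
    <component>kernel</component>
    <component>win32</component>
    <component>shell</component>
  </sku>
  <sku name="Ultimate">
    <component>kernel</component>
    <component>win32</component>
    <component>shell</component>
    <component>media-center</component>
  </sku>
</skus>
"""

def components_for(sku_name, xml_text):
    """Return the component list a given SKU should be assembled from."""
    root = ET.fromstring(xml_text)
    for sku in root.iter("sku"):
        if sku.get("name") == sku_name:
            return [c.text for c in sku.iter("component")]
    raise KeyError(sku_name)

# One pool of compiled binaries, many editions -- no recompilation involved:
print(components_for("Ultimate", SKUS_XML))
# ['kernel', 'win32', 'shell', 'media-center']
```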
Good to see Anthony and others who are actually “in the know” about NT. It is apparent that Shipping Seven is in fact legit. I do agree the developer seems to be out of touch, but then again, most low-level programmers are when it comes to user-level features. Look at how Mark R. describes file-copying performance to see how far they are from reality.
As others have said, it is a travesty that explorer.exe and now all the media-related services in Vista are dragging a good name through the mud. If the Explorer shell were rewritten, many of these complaints about slowness would disappear overnight.
Just google for explorer.exe and threading problems to see what I mean. It all comes down to interface responsiveness and correctness. As a user, I don’t care so much (well, OK, a little) if it takes an extra five to ten seconds to copy a few-hundred-meg file; but damn it, I should be able to start another file copy or do other related things (renaming, whatever) with the interface while it is going on, and damn it to hell, the time estimates had better be informative (accurate total-time AND individual per-item progress estimates)! All these low-level developers appear to be completely missing and ignoring this problem, manifested by Explorer windows freezing and not re-painting, crashing, and other lame problems which have been around since Win95, NT 3 and 4 and up, all caused by poor threading and exception-handling design.
Not that a kernel developer *should* worry about this. It’s a management and right-hand-not-talking-to-left problem.
They should look to the BeOS code to see how it should be done from the user’s perspective; so far the only OS that seems to have approached perfection in this area. 🙂
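The complaint above boils down to where the long-running work happens. A minimal simulated sketch (no real UI; a loop stands in for the event loop): with the copy on a worker thread, the “interface” keeps servicing events until the copy signals completion.

```python
# Simulated illustration: if a long-running copy runs on the same thread as
# the interface, nothing else happens until it finishes. On a worker thread,
# the "UI loop" keeps servicing events meanwhile.
import threading
import time

def slow_copy(done):
    time.sleep(0.2)       # stand-in for copying a few hundred MB
    done.set()

done = threading.Event()
threading.Thread(target=slow_copy, args=(done,), daemon=True).start()

events_handled = 0
while not done.is_set():  # the interface stays responsive during the copy
    events_handled += 1   # e.g. repaint, accept a rename, start another copy
    time.sleep(0.01)

print("UI stayed responsive:", events_handled > 0)
# UI stayed responsive: True
```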
Sometimes the way things are said is more important than the words spoken.
Having first read the Seven article and now this NeoSmart one, I must agree with the authors of this article. The way Seven presents the facts is indicative of someone trying to look like they know what they’re talking about, throwing out half-references without anything real behind them.
btw, I think from the way Seven made the post it was pretty clear that the XML files are used in the compiling process, but that’s just my opinion. Is Seven real or not? I’d definitely vote no given what I’ve just read. (Doesn’t mean I’ll stop reading them though – God, I wish Windows 7 would ship already!!)
Oh, and I think it is most likely that any Windows developer caught looking at the code from open source operating systems would probably be in more trouble than it was worth 🙂
@Henry K — Seven only mentions “build process”. Since 1) it’s quite reasonable to surmise that the build process includes more than just compilation, and 2) there are XML files that we have access to that ship with Windows and define components and SKUs, I think there’s more evidence that these XML files are what Seven is talking about. Compared with zero evidence that there are XML files used by the compiling phase. (Yet I admit there is zero evidence *against* there being XML files used by the compiling phase.)
Hoopskier, I’m not sure if you have experience with corporate software development; but (in my experience) there are two different terms:
1) Build process -> pre-compile actions and other in-the-IDE stuff.
2) Packaging/Deployment process -> sorting/converting the newly-built binary files into different packages (and SKUs).
If you look at some of the whitepapers and chat logs for Microsoft, you’ll find that they use these terms with quite the distinction between them.
Of course it’s possible that this is what Seven meant; but I don’t think you should be so hasty to disregard what NeoSmart has to say because there is a lot of logic to their conclusion.
Honestly, what gets me most about Seven’s posts is the non-technical nature of his/her writing – talking about what but not how, which (again, only in my experience) isn’t how programmers talk.
With the present discussion and all, I think there’s another possibility that we may have missed in our initial post.
It’s possible that the author of Shipping Seven is affiliated with Microsoft and Windows 7, but not in the role of a programmer. After all, there are a lot of people inside MS that have contact with the development team and know what’s going on and not all of them are necessarily programmers.
I guess that would explain the nature of the posts & the way the kernels are described while also allowing for genuine insider info… Just a possibility though.
Henry,
Depends on the teams and terms involved.
MSBuild, for example, takes a build process from compilation through to packaging of the installer, and can be set up for several different SKU outputs if you want.
It is XML files that define this process.
The build process for Mozilla basically does the same thing, for example.
In Linux land, the build process for Linux is at the kernel level for the kernel; different distributors (Ubuntu, for example) will take different components and make a distribution.
GNOME has a nightly build; I haven’t looked at its build process too much, but from what I have read, it appears to build through to packaged components as well.
I would say it is a matter of perspective and where the process is going.
Team System, for example, has a CI process with 2008, so there is a nightly build. Depending on where the build process is at, I am sure that it builds to an installable (packaged) product every night, especially as the build gets close to RTM.
“Shipping Seven is a big, fat fraud.”
There are a number of people actively testing the content in your post.
“eagerly awaiting the first leaked builds of Windows 7 to appease an inner itch”
I’m not going to fulfill that itch for you, sorry.
Oh wait, you meant him.
“You can’t change/modify/revert pre-build settings by running commands in the command line. Components that are integrated at compile time simply cannot be removed by running a bunch of commands afterwards – especially not from within the resulting OS itself.”
I should probably note here that the operating system user interface in Windows Vista is simply Explorer. All things such as the scroll bars, buttons, etc. exist for the purpose of allowing a user interface to be created. Explorer does not have to run and can easily be removed from the system. The best example here would be Windows Server 2008’s Server Core SKUs. A graphical user interface is not required by any means.
Shipping Seven doesn’t sound at all like a blog written by some teenager with basic scripting skills. I don’t feel he is someone who is talking about something he doesn’t know; I feel he is someone who knows, but
1) Can’t really tell everything
2) Tries to keep things as simple as possible to widen the audience range
Think of Richard Feynman: he could talk about quantum physics and make you understand a lot about it without ever entering into the in-depth technicalities that would render the talk incomprehensible to the “outsiders”.
The Shipping Seven author does just the same: he won’t tell you “by using PowerShell you can edit this file, remove the reference to this module so that it won’t be loaded anymore, etc. etc.” – he just vaguely waves at “stripping down the system by command line”.
This is, after all, the difference between a monolithic and a modular kernel: as you correctly stated, you can’t remove – say – support for ReiserFS from the Linux kernel without removing it from the source and recompiling. Why? ’Cause it’s a monolithic kernel. Everything that has to do with it is inside. A modular kernel allows for stuff that’s *outside* and that you can load/unload at will. Just imagine ReiserFS support being something like… a lower-level daemon or service.
Nice article anyway; it shows an excellent aptitude for manipulating the simpler minds… Jobs style! 😀
Your entire claim that Shipping Seven is a fraud is full of holes. Please read http://jtntech.blogspot.com/2008/06/no-that-doesnt-make-shipping-seven.html for explanation.
Josh, to me your post is just spam. You’ve just taken two points from the comments here and put them in “an article” without anything new…..
Mamoud – You are in fact the one here who speaks from ignorance.
You bring up this issue of “compile time” configuration. Well, if you understood the Windows architecture, you’d know that compile-time dependencies are not the problem. Even in the userland most of Windows is built on COM, with components at different dependency layers living in different binaries.
At the kernel level, there is no such thing as compiling the kernel “without a feature” – the kernel itself has no features. You can’t build it with or without filesystem support because it HAS no filesystem support at the microkernel level. There’s a separate binary for that. Same with virtually every component you could want to remove.
Also, the OS X kernel has never been rewritten. Since the first version, OS X 10.0, Apple has not rewritten the kernel. It’s pretty clear that this is what Seven was referring to.
Perhaps in the future you should do some research before displaying your ignorance for the world to see?
The Shipping Seven dude meant ‘replace with something else or heavily modify’ when he said ‘rewrite’. It’s obvious that all the production kernels routinely have big chunks of code modified over time. Yet Linux is still Linux, OSX is still Mach + BSD + IOKit, and XP-Vista-7 is still NT.
But it seems that our friend here likes arguing semantics more than making sense. Face it buddy, you went way over the line with a clueless rant, and people are calling you out on it. Hey, it’s no problem, happens to everyone. Funnily enough you even seem somewhat sincere in your debunking mission, instead of merely trolling for page hits, and I don’t know what’s worse.
You’re right – There is always a lot of FUD surrounding beta versions of Windows and the community doesn’t need any more questionable sources of info clouding up the facts surrounding the next Windows release. FUD really hurt Windows Vista (not that it’s a flawless OS or anything) and we saw several problems with Shipping Seven that we felt the need to point out for no reason other than to raise the awareness in the community about a possible fraud.
I’ve already admitted in my last comment that perhaps the author of Shipping Seven is actually involved in the Windows 7 program, just not as a developer. Given the genuine concerns about the authenticity/quality of the facts in that post and the (plenty) of room for multiple (mis-)interpretations (on purpose?) I’m not convinced Shipping Seven is on the up-and-up; but I guess anything is possible.
I guess only time will tell who’s telling the truth and who’s not; hopefully either way Windows 7 will be a good improvement.
Henry K., my post wasn’t derived from the comments on this page, and I made more than two points.
Soma (Seven) sounds like Just Another Developer. Almost certainly not a kernel developer.
He doesn’t know everything, he explains his understanding of things, and he’s frequently overgeneralized or just plain wrong.
But this post starts off on a rant, and then uses the example of a file system driver? Are you kidding? Blows up about the use of a “command line” because it’s “godlike”? So the ignorance of this poster attacking what he perceives as the ignorance of another poster now contributes to the overall flurry of crap around MinWin.
I’ve heard that on Linux systems, “compiling” the kernel produces a monolithic kernel, so you literally have to recompile it to change its feature set.
With NT, the kernel is the very core-most part of the system, and then kernel *mode* components (that live in kernel *space*) can be added and removed, chopped and changed, just by adding or removing entries in the registry. File system drivers are included – the same kernel bits can boot from FAT or NTFS (or whatever file system driver you have installed), without a recompile.
Using “the command line” is necessary because the GUI doesn’t expose every component, probably to avoid users blowing their legs off.
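For what it’s worth, that registration really is just registry data: a kernel-mode file system driver is declared under the Services key and loaded by the kernel at boot, no rebuild involved. A hypothetical entry (driver and file names invented for illustration) might look like:

```
; Hypothetical .reg fragment -- registers a kernel-mode file system driver.
; Driver and file names are invented; value meanings are standard service config.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ExampleFs]
"Type"=dword:00000002        ; SERVICE_FILE_SYSTEM_DRIVER
"Start"=dword:00000001       ; SERVICE_SYSTEM_START (loaded early in boot)
"ErrorControl"=dword:00000001
"ImagePath"="system32\\drivers\\examplefs.sys"
```

Delete the entry and the driver simply isn’t loaded on the next boot – no recompilation of anything.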
Hi to all, I receive the RSS feed and was tempted to see what I’d find here… so I’ll tell you what I think.
In this entire, tiring post I see a big lack of knowledge about Unix and also about the making of Windows Vista. Anyone who has had any of the 4000-series Longhorn betas and Visual Studio can tell you what can and can’t be done by recompiling parts of that system (just to be clear: having the code from MS as a beta tester), including the real WinFS feature (actually absent in Vista) running, and pretty well for me.
The working, actual official Vista system was stripped of many of the best solutions in the Longhorn betas; maybe it was too much to give…? Time will tell, but modularity nowadays is a hard matter… I prefer to think that they (MS) will give us back all that they ripped out, and then we can talk about “modularity”, at least when working with the MS development tools…
About what Anthony said, I must note that the Unix API is nothing other than a compatibility API… Someone without knowledge of programming who reads that could be led into confusion. As we know, Unix is not only the POSIX-like directory tree or the CDE user interface; it’s a kernel, and by now (i.e., SCO UnixWare) one with almost 10,000,000 lines of code… And the latest Linux kernel is around that number, too.
Finally, I see that talking about what anyone does in their IDE misses the point of Shipping Seven entirely. I recommend re-reading Andy Tanenbaum’s old book about OSes.
He is laughing on you (http://shippingseven.blogspot.com/2008/06/best-blog-post-comment-ive-read-today.html).
You can rip out support for ext3 without recompiling if you’ve chosen to compile ext3 support as a module.
Yes, Jason… Martin, I don’t know why, but I left a response before and it wasn’t submitted; my answer was simply: he can say whatever he wants, it’s a free country! 🙂