This is an app that can trick the "server check".
ale5000 - 12 01 09 - 17:08
Actually, the main difference between NT4 Workstation and Server was... two registry settings (really one, duplicated to prevent run-time modification). Starting with NT5, real differences started appearing, such as kernel compilation options and bundled components (Terminal Server wasn't available in Windows 2000 Pro, but was there in the Server versions and up).
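In practice, the distinction boiled down to one string value, ProductType, under HKLM\SYSTEM\CurrentControlSet\Control\ProductOptions. A minimal sketch in Python of what the documented strings mean (purely illustrative - the real check happens inside the kernel, not in user code):

```python
# Sketch: how NT4's ProductType registry value distinguishes Workstation
# from Server. The value lives under
# HKLM\SYSTEM\CurrentControlSet\Control\ProductOptions.
# Documented strings:
#   "WinNT"    -> Workstation
#   "ServerNT" -> stand-alone Server
#   "LanmanNT" -> Server acting as a domain controller

def product_edition(product_type: str) -> str:
    """Map NT's ProductType string to a human-readable edition."""
    editions = {
        "WinNT": "Workstation",
        "ServerNT": "Server (stand-alone)",
        "LanmanNT": "Server (domain controller)",
    }
    return editions.get(product_type, "Unknown")

if __name__ == "__main__":
    print(product_edition("WinNT"))  # Workstation
```

The "server check" apps mentioned above essentially lie about (or patch) this value.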
Windows 2003 Server was better for gaming than any version of Windows 2000, essentially because, although 2k3 was geared towards a server load, Windows 2000 sucked at ACPI management, including virtual IRQs, and sound cards generate a LOT of interrupts - making 2k3 better than 2k for sound in general. As for XP, 2k3 came out at a time when games still targeted 98 and 2k, because both OSes performed better than XP out of the box (on single-core CPUs with less than 512 MB of RAM).
Right now though, gaming-wise, Windows 2000 has become hard to deal with (see the IRQ problems for sound cards mentioned above), and 98 isn't supported anymore and can't handle current hardware (installing it on partitions larger than 8 GB is... risky, and it can't deal with more than 512 MB of RAM without some config-file kung-fu), so XP is the baseline. With SP2 (and 3) backporting so much 2k3 goodness, and desktop hardware being four times more powerful than when 2k3 came out, gaming on a server Windows OS has lost its main appeal (increased stability and performance), making its problems with gaming more prominent (less forgiving of shoddy programming, lowered preemption). 2k3 is fine because hardware has gotten powerful enough to hide most of its limitations, but 2k8 is another matter.
I don't know the details of Windows' preemption model, but if another OS kernel like Linux is any indication, settings like no/voluntary/full kernel preemption have a huge effect on how fast a machine reacts to a given load (sound in gaming strongly depends on that setting, as do most server tasks, even on very powerful hardware: it's noticeable).
Mitch 74 (link) - 13 01 09 - 02:33
This has been an ongoing discussion on the 2cpu.com forums for quite some time.
Before addressing the "My Win2kX Server rig won't run XYZ Desktop App" questions, I always ask "Why are you running a server OS?" (partly because it may be relevant to tracking down their problem, and partly to satisfy my own curiosity).
The conversation then usually takes one of a few paths:
A) "I have an MSDN subscription and wanted to play around with it...."
B) "Ummm... I'm not really sure."
"Okay, if you can't tell me why you are running a server OS, can you tell me why you aren't running Windows XP?"
"Stop screwing around with a server OS and install XP for pete's sake."
C) "Server is so much better at..."
"No it isn't. It is exactly the same as XP"
"But... but... but... It's better at..."
"No. It isn't. That's why you have to reconfigure X for it to work at all."
"But, I heard that the Server version is..."
"You heard wrong. The server version is better at doing server tasks."
"But... but... but..."
"Do you have more than two sockets on your motherboard?"
"Do you need to address more than 4GB of ram on a 32bit OS?"
"Stop screwing around with a server OS and install XP, for pete's sake."
I always use the following analogy: "tweaking" a server OS to run games is like buying a limo, cutting out the middle section, and welding it back together so it will get better gas mileage and be easier to park.
Murdock (link) - 13 01 09 - 03:07
"Windows 2000 sucked at ACPI management, including virtual IRQs"
Yep, and one factor in this was whether you were using the good old 8259 PIC or the newer APIC. In fact, I think that was one of the reasons why MS began promoting the APIC even in uniprocessor PCs, helped by the fact that by then most chipset vendors had already integrated the I/O APIC into the southbridge (one southbridge of that era that still didn't have one was ALi's M1535(D+)), and most CPU vendors had integrated the local APIC into the CPU (Intel did this long ago with the Pentium; AMD only did it with the Athlon - the K5 and K6 used a different OpenPIC that no chipset supported; VIA was late, with only the C7 and later having the LAPIC). Win2000 on the ACPI/PIC combination stacked all the PCI interrupts. WinXP aimed to change this, but because of ACPI BIOS bugs, it was only changed if you were using an Intel chipset; non-Intel chipsets still stacked interrupts like Win2000. In contrast, with the I/O APIC it was all moot, as it put the PCI interrupts on IRQs above 15.
More on this:
Yuhong Bao - 13 01 09 - 13:55
You guys are giving me bad flashbacks to the days when I had to write my own sound card interrupt handler.
The last time I had to deal with interrupt stacking was actually in Altirra, when I was still writing the internal kernel. The 6502 only has two interrupt levels, IRQ and NMI, and the Atari 800 stacks three interrupts on the NMI line and EIGHT on the IRQ. The IRQ is a pain to deal with because two of them are performance critical -- serial in/out -- but annoyingly they aren't on bits 7 and 6 like the DLI and VBI are on the NMI side. As a result, the kernel has to burn a noticeable number of clocks per interrupt on a 1.79MHz CPU just doing interrupt decoding. Granted, it's a bit less of a pain to do bit position based table dispatch on an x86, but I've heard that interrupts are proportionately much more expensive on modern CPUs due to cold caches, TLBs, etc.
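The table-dispatch trick that's cheap on x86 can be sketched in a few lines of Python: precompute a 256-entry table so that one lookup on the interrupt status byte replaces a chain of bit tests. (Bit assignments here are hypothetical, not the Atari's actual IRQST layout.)

```python
# Bit-position table dispatch: for every possible 8-bit interrupt status
# value, precompute which handler services its highest-numbered set bit.
# At interrupt time, dispatch is a single table lookup instead of up to
# eight sequential bit tests.

def make_dispatcher(handlers):
    """Build a 256-entry table mapping a status byte to the handler of
    its highest-priority (highest-numbered) pending bit."""
    table = [None] * 256
    for status in range(1, 256):
        table[status] = handlers[status.bit_length() - 1]
    return table

log = []
handlers = [lambda i=i: log.append(i) for i in range(8)]
dispatch = make_dispatcher(handlers)

dispatch[0b01000000]()  # bit 6 pending -> handler 6 runs
dispatch[0b00000101]()  # bits 0 and 2 pending -> handler 2 runs first
```

On a 6502 there is no room for a 256-entry indexed jump per status register, which is exactly why the decoding has to burn cycles testing bits one by one.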
Phaeron - 14 01 09 - 00:09
For what it's worth: on a 1 GHz Duron on a VIA KT266A chipset, with all COM and LPT ports disabled and an SB PCI128 sound card (hardware-accelerated mixing, but no other acceleration), under Windows 2000 Pro SP3, switching from the ACPI kernel to the "Standard PC" kernel brought (on top of much more stability) something like 10% (!) extra performance in 3D benchmarks that used sound. The delta was much lower in purely graphical benchmarks.
Mitch 74 (link) - 14 01 09 - 06:06
"In contrast, with the IO-APIC, it was all moot as it put the PCI interrupts on IRQs over 15."
That's assuming the southbridge is a PIIX3 or later. Early APIC systems had only 16 interrupts, just like the 8259.
Yuhong Bao - 19 01 09 - 01:54
Some of you guys need to double-check your facts re. Windows 2000. I run Win2K SP4 on an Intel 865GRH motherboard in ACPI mode with a 2.6 GHz Hyper-Threading P4, and my PCI IRQs are just fine:
(PCI) 16 Intel 82801EB USB Host Controller
(PCI) 16 Intel 82801EB USB Host Controller
(PCI) 17 RADEON 9250 Series
(PCI) 17 SoundMAX Integrated Digital Audio
(PCI) 18 Intel 82801EB Ultra ATA Storage Controllers
(PCI) 18 Intel 82801EB USB Host Controller
(PCI) 18 Intel(R) PRO/1000 CT Network Connection
(PCI) 19 Intel 82801EB USB Host Controller
(PCI) 23 Intel 82801EB USB2 Enhanced Host Controller
Show me the problem?
Josh Straub (link) - 25 01 09 - 20:11
@Josh: you're not using any IRQ-intensive device:
- The SoundMAX Digital Audio is an AC'97 codec; it doesn't generate as many interrupts as a dedicated card: even a "basic" Sound Blaster PCI128 with full acceleration (read: plain dumb hardware mixing, no EAX or such) can generate close to 3000 interrupts/second, while such a codec will generate around 150/sec.
- The others are not very IRQ-intensive devices either: the network adapter relies upon the CPU for its logic, the USB host controllers don't generate many interrupts, and a graphics card of that kind generates even fewer, if only because it runs with a PCI latency of 255 cycles (instead of the default 64; I could somewhat reduce crashes by setting my system to a default PCI latency of 128).
As such, your configuration will NOT make Windows 2000 unstable: you have no IRQ-intensive hardware installed. Mine wasn't unstable either, unless I used accelerated audio (which did improve performance notably) without disabling ACPI support: it would then regularly give me an IRQL_NOT_LESS_OR_EQUAL error in the sound card driver. (I mean, mixing 64 samples requires 64 CPU copies per time slice, thus 64 interrupts/time slice; if the mixing is done in software, the CPU mixes those 64 samples itself and transfers the result to the DAC in one go - one interrupt/time slice.) Otherwise, Windows 2000 (especially after SP3) was remarkably fast, stable and resource-friendly.
If you read correctly, it means that a hardware device that generates lots of interrupts, stacked with other devices that generate a few themselves, forces the OS to sort through a haystack to find its interrupts; if you give a highly IRQ-active device its own interrupt, no sorting is necessary, and the OS is much less likely to goof up.
Windows 2000 did not handle interrupt storms very well, and since the only way to give a piece of hardware its own IRQ was to disable ACPI support and fall back to legacy PIC behavior, Windows 2000 didn't have a good reputation WRT its ACPI handling:
- can't negotiate IRQ storms
- can't modify IRQs in ACPI mode
- (not related, but added to the mix for completeness) no CPU power management through ACPI.
Windows XP did handle all of those better.
Mitch 74 (link) - 28 01 09 - 06:09
I think you're mixing up interrupt requests (IRQs) with direct memory access (DMA).
An interrupt request (IRQ) causes the CPU to temporarily postpone whatever program it is running and jump to a special routine. This is enormously disruptive to execution and can cost thousands of cycles. A direct memory access (DMA) only pauses the CPU while another device accesses main memory - this is an order of magnitude faster and might only stall the CPU for a few hundred cycles, or even much less if main memory is not busy. A sound card reading 64 samples does not issue 64 interrupts, since it simply reads the data via DMA. Even old ISA Sound Blasters used DMA for playback and only issued IRQs at sound buffer breakpoints. Good cards do large DMA requests to minimize DMA overhead.
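Since playback interrupts come only at buffer boundaries, the IRQ rate depends on buffer size, not on per-sample transfers (those go over DMA). A back-of-the-envelope sketch (sample rates and buffer sizes here are illustrative assumptions, not measurements from any specific card):

```python
# One IRQ fires per completed DMA buffer, so:
#   IRQs/sec = sample_rate / samples_per_buffer
# Per-sample data movement happens over DMA and costs no interrupts.

def irqs_per_second(sample_rate_hz: int, buffer_samples: int) -> float:
    """Playback interrupt rate: one IRQ per completed DMA buffer."""
    return sample_rate_hz / buffer_samples

# A generous 4096-sample buffer at 48 kHz:
print(irqs_per_second(48_000, 4096))  # 11.71875 IRQs/sec
# A stingy 16-sample buffer (tight-latency hardware mixing):
print(irqs_per_second(48_000, 16))    # 3000.0 IRQs/sec
```

Note that the tiny-buffer case lands right around the ~3000 interrupts/second figure Mitch quoted for the PCI128 with full acceleration - the difference between the two cards is buffer granularity, not DMA itself.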
The PCI latency timer has nothing to do with interrupts. It controls how long memory bursts on the bus can be, which matters since the first dword transferred takes more cycles than successive ones. Longer bursts permit higher throughput, at the cost of other devices having to wait longer for a burst to finish. The issue with the PCI SoundBlaster cards is that apparently many of them had stupidly tight requirements on DMA timing, so Creative hacked the chipset registers to lower the PCI latency timer to prevent breakups. The side effect was a drop in PCI bus throughput.
IRQL_NOT_LESS_OR_EQUAL also doesn't necessarily have anything to do with hardware IRQs. The Windows NT kernel also has software interrupt levels, called IRQLs, and that bug check code often simply indicates a crash caused by a driver that tripped an IRQL check in the kernel. For a while, the SoundBlaster Live! drivers were astoundingly unstable on dual-CPU systems and would often bugcheck with that code.
Phaeron - 30 01 09 - 01:30
"Show me the problem?"
You are running in ACPI/APIC mode; the IRQ sharing happens only in ACPI/PIC mode.
"and since the only way to give a piece of hardware its own IRQ was to disable ACPI support to fallback to legacy PIC behavior,"
Or, if your hardware has it - and newer hardware usually does (HT, SMP or multi-core hardware definitely does, as it is required for those to work) - enable the APIC.
Another link for more on this:
Yuhong Bao - 30 01 09 - 04:17
>> "Server is so much better at..."
For Win2003 vs. WinXP you gain:
1) unconstrained IIS installations for web development
2) multiple-session remote desktop
3) no limits on incoming and outgoing SMB connections
4) much higher limits on concurrent TCP/IP connections
And on the downside:
1) some incompatibilities here and there (usually editing the installer is enough; there are tools for that)
2) no Bluetooth
With Vista, many of these points are solved (for example IIS).
PurpleDust - 31 01 09 - 13:53
@Phaeron: I agree, in the sense that yes, NT5.x can handle virtual IRQs and bad drivers could trigger said check in the kernel. However, from what I could find on Creative fora at the time, my own problem was caused by the driver using any interrupt above 15. I reproduced the crash in WinXP on the same hardware by manually setting the sound card's interrupt above 15 instead of 11 (the interrupt I usually gave it; for some reason, it also disliked interrupts under 8, where the driver refused to initialize the card, go figure).
About the PCI latency timer: sorry about that, I was sure it specified how many PCI clock cycles a device could keep the bus for itself (since PCI is a shared bus) before relinquishing it, and I thought interrupts couldn't be sent outside of a peripheral's turn (thus, modifying this timer would change interrupt timings). Anyway, for some reason lengthening this timer helped marginally, but it wasn't a magic fix.
@Yuhong Bao: my POST listed the interrupt it gave to the card (interrupt 11); 2k would re-map it (interrupt 19), while XP would keep the BIOS's choice (interrupt 11). As such, I guess the problem was solved by VIA around that time (the mail you linked to is dated December 2001, and the KT266A's white paper is from September 2001), meaning that the following applied to my machine: "In Windows XP, I made a bunch of changes. In machines without cardbus controllers, (which don't have the IRQ problems created by PCMCIA,) it will try to keep the PCI devices on the IRQs that the BIOS used during boot. If the BIOS didn't set the device up, then any IRQ may be chosen."
Which could mean that the KT266A chipset behaved like, say, Intel's i810. So, @Josh Straub: find a Sound Blaster PCI128 (the one based on an Ensoniq chip), install it in your machine, use the latest driver available for it on Creative's website (it is from 12/2002), set the sound acceleration slider to "full" (it defaults to "basic") - and run a game (I found the second Final Fantasy Online bench an ideal way to cause the crash). Said driver is the same for Windows 2k and XP.
@PurpleDust: what you cite is true, and is at the core of the problem: do you need concurrent TCP/IP connections, IIS, multiple remote desktops, or multiple SMB connections for gaming? Pretty much all the time, the answer is no (unless you run a UT2003 server plus patch repository on the machine for a more-than-10-machine LAN, host a UT wiki, and also run the client on it - a very unlikely scenario).
Mitch 74 (link) - 02 02 09 - 05:39
If everyone could do that, the world would be a better place, so why not? :P
Gabest - 02 02 09 - 09:01
I do play games on Windows Server 2008, and for a good reason. I have a few dozen licenses bought for virtual machines, but not all are used, so I put one on my desktop. The big difference versus Vista is that Server 2008 takes less RAM with the same configuration (tested and confirmed), about 200-300 MB less, and is also more responsive - not sure why; maybe Vista has a lot of crap on top of the OS and Server 2008 has less, but it is faster to boot or even to open Outlook. If Windows 7 has the same perceived performance as Server 2008 and a similar memory footprint, I will change to Win7; until then, for me Server 2008 is a better Vista than Vista.
AdrianB - 02 02 09 - 17:25
@AdrianB: ah - of course, I had indeed forgotten about Vista. Bloat is indeed not something you associate with a server OS, and Vista redefines bloat. Server 2008 may be the only way to play a DX10 game without having 600 MB of RAM used up by the OS alone - which is critical on 32-bit systems with modern games that enjoy more than 2.5 GB of free addressable RAM. Effective RAM addressing range under Vista with a 1 GB frame buffer: less than 3 GB; add the 700 MB required by Vista + antivirus + other knick-knacks, and you get swapping (which effectively disqualifies Vista as a gaming platform).
Mitch 74 (link) - 03 02 09 - 04:49
I agree with AdrianB. I have a laptop that came with Vista, and I installed Win2k8 side by side, set up exactly like Vista, with visual styles and everything. Total RAM usage was 450 MB LESS than under Vista. That much freed up is a godsend for a laptop running on an Intel GMA X3000.
Pharaoh Atem (link) - 04 02 09 - 16:50
"@Phaeron: I agree, in the sense that yes, NT5.x can handle virtual IRQs and bad drivers could trigger said check in the kernel. However, from what I could find on Creative fora at the time, my own problem was caused by the driver using any interrupt above 15. I reproduced the crash in WinXP on the same hardware by manually setting the sound card's interrupt above 15 instead of 11 (the interrupt I usually gave it; for some reason, it also disliked interrupts under 8, where the driver refused to initialize the card, go figure)."
Sounds like the driver was having an issue with the APIC, and you had to switch to a PIC HAL to fix it.
"my POST listed the interrupt it gave to the card (interrupt 11), 2k would re-map it (interrupt 19), while XP would keep the BIOS's choice (interrupt 11). As such, I guess the problem was solved by VIA around that time (the mail you linked to is dated December 2001, the KT266a's white paper embargoed itself to September 2001), meaning that the following applied to my machine: "In Windows XP, I made a bunch of changes. In machines without cardbus controllers, (which don't have the IRQ problems created by PCMCIA,) it will try to keep the PCI devices on the IRQs that the BIOS used during boot. If the BIOS didn't set the device up, then any IRQ may be chosen. "
Nope, I suspect what was happening here is that you installed 2k as "ACPI Uniprocessor PC" (meaning ACPI/APIC/UP), while you installed XP as "ACPI PC" (meaning ACPI/PIC).
Yuhong Bao - 11 02 09 - 00:36
"NT5.x can handle virtual IRQs"
The extra IRQs in APIC mode are not virtual; they are actual physical interrupt lines. Older APICs (such as the 82489DX or the ones inside the SIO.A or PCEB/ESC southbridges) only had 16 interrupt lines; newer APICs, such as the 82093AA normally used with the PIIX3 and PIIX4 southbridges, or the APIC built into the ICH and later, have 24 or more interrupt lines.
Yuhong Bao - 11 02 09 - 04:34
BTW, back when I/O APICs (when I say APIC, assume I am referring to the I/O APIC, not the local APIC, unless I say otherwise) had only 16 interrupt lines, PCI interrupts were routed to both the 8259s and the APIC using a programmable interrupt router (PIR), which the BIOS controlled back then. Around the time OSes began controlling the PIR themselves (which was called IRQ steering), APICs began to feature 24 interrupt lines. On APIC systems with 24 interrupt lines, the PCI interrupts were routed to IRQs above 15, and IRQs below 16 were reserved solely for ISA devices. This eliminated any chance of PCI interrupts conflicting with ISA interrupts, which was a big source of plug-and-pray hell back then. Thus the only PCI interrupt that should be below 16 on an APIC system is the fixed ACPI SCI. This is why I suggested that you might have installed 2k as ACPI/APIC/UP and XP as ACPI/PIC.
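That routing policy can be sketched in a few lines of Python - a simplification of the idea, not actual chipset programming; the rotation scheme shown is the common INTx "barber pole" swizzle, and the exact mapping varies by board:

```python
# Sketch: on a 24-input I/O APIC, legacy ISA IRQs keep inputs 0-15 and
# PCI INTA#-INTD# lines land on inputs 16-23, so PCI can never collide
# with ISA. The slot/pin rotation spreads sharing evenly across slots.

ISA_INPUTS = 16    # inputs 0-15 reserved for ISA devices
APIC_INPUTS = 24   # e.g. the Intel 82093AA

def route_pci_int(slot: int, pin: int) -> int:
    """Map a PCI slot/pin (pin 0 = INTA#) to an I/O APIC input above 15."""
    pci_lines = APIC_INPUTS - ISA_INPUTS   # 8 inputs left for PCI here
    return ISA_INPUTS + (slot + pin) % pci_lines

print(route_pci_int(slot=0, pin=0))  # 16
print(route_pci_int(slot=1, pin=0))  # 17
print(route_pci_int(slot=0, pin=1))  # 17 (slot 0's INTB# shares with slot 1's INTA#)
```

Every result lands in 16-23, which is exactly why Josh's IRQ listing above shows PCI devices on 16-19 and 23 with nothing touching the ISA range.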
Yuhong Bao - 11 02 09 - 04:51
"Thus the only PCI interrupt that should be below 16 in an APIC system is the fixed ACPI SCI."
Assuming again that you have the PIIX3 southbridge or later, which is certainly true if you have ACPI.
Yuhong Bao - 11 02 09 - 04:52
"(the interrupt I usually gave it; for some reason, it also disliked interrupts under 8, where the driver refused to initialize the card, go figure)."
The "8259" in ATs and later machines was actually two cascaded 8259s, with the master 8259 servicing IRQs 0-7 and the slave 8259 servicing IRQs 8-15.
Yuhong Bao - 11 02 09 - 05:46