
SchizophrenicMC

Everything posted by SchizophrenicMC

  1. For servers, we use integrated graphics, but only because we're the only people using the graphics outputs and they have a really hard time causing downtime for customers. Our workstations use expansion graphics to run dual monitor output, though. Of course, between the age of our VGA cables, the age of the crash cart monitors, and the age of some of these server mobos, it can take a while to figure out which component is behind a no-display on a server that POSTs fine when viewed in IPMI. (I've sketched that remote triage routine after this list.) It's usually cables, but I've dealt with more than one failed mobo graphics output. When that happens, it necessitates a chassis swap, which is a pain for everybody involved. Now, I will grant that expansion peripherals fail a lot more often than integrated peripherals at work. Or, at least, we get alerted of it more often. Customers don't often realize it's their onboard SATA controller doing weird things, not individual hard drives. (Though it is a fair assumption that a hard drive would be causing the trouble) When a dedicated RAID controller is doing something weird, it sends an automated alert to our management system. And to be fair, most RAID controller problems are actually related to their peripheral battery backups. Obviously, the 2 most common problems are hard drives and RAM. But man, is it a pain when something mobo-integrated does funny stuff, because fixing that requires removing every component from a server individually, building a new server for the customer, carrying all the old parts, including the server chassis, up and across the building, and carting the new parts down and across for the new provision. And then you have to deal with the RMA process for the mobo and try to keep a customer happy with the extensive downtime they have to go through as a result of this ordeal. It's just not fun. I can replace a RAID controller in 20 minutes, with another half hour for automated setup and verification. Including provisioning time, it can take a whole shift to chassis swap a server.
  2. Just as an aside, I work for IBM, provisioning and maintaining hardware. The only integrated peripherals we recommend using are the NICs and onboard IPMI/KVM. Unless the machine is being used in a 10G application, in which case, PCIE NICs. We won't provision a machine with SSDs or any kind of SAS config without a standalone RAID controller, and we don't support soft RAID. All hard drives are physically connected to backplanes, even in applications where only one drive will be used. We even used to use standalone IPMI cards, but our mobo supplier finally started making integrated IPMI systems that actually work. Usually. They're still pretty bad about randomly resetting their IPMI configuration. Dev is apparently working on this, but it still requires tons of manual (read: time-consuming) intervention. Actually, we've got serious problems with newer mobos that have USB 3.0 losing their 3.0 ports. BIOS doesn't recognize them 70% of the time. We require USB 2.0 on all machines for direct console use because onboard 3.0 just isn't there. Reliability is our number one concern, and we've got it for the most part. But, parts fail. That's a fact. And if you have too many parts integrated into the mobo, you will be replacing a lot of mobos. That's expensive and time-consuming. I think multiple graphics cards is dumb for gaming, but if you plan on rendering any video or CGI, you'll spend a lot of time waiting with only one card. And if time is money, you might be better off spending some money to save some render time. I don't play a lot of games anymore, so I'm looking at all of this from a productivity perspective. Sometimes you need a lot of peripherals, a lot of write speed, and a lot of graphics crunching. I know for a fact we have one video-crunching server in my DC with 2 graphics cards, 256GB of RAM, and 4 1.2TB SSDs in a RAID10, with an additional 4 2TB HDDs in another RAID10. (The capacity math on those arrays is sketched after this list.) You do not want to know how much that server costs the customer monthly.
  3. I'm not such a huge fan of onboard peripheral controllers, though. I mean A) there's only so much room in an ATX-compliant IO form factor, and B) I've had enough integrated peripherals fail where the only fix is replacing the whole mobo, which is usually more money than I have to spend on a sound controller dropping out, or on a fixed USB bus freaking out because it thinks a keyboard has it saturated, because mobos are built too cheaply. Maybe the problem is I keep owning junk mobos. Maybe that's everybody's problem. Maybe that is the single greatest problem in modern computing: Motherboards suck. And making the motherboard do more things may be marginally faster, but I'd really just rather make them better at letting a bunch of expansion cards use other system resources more efficiently. Then I wouldn't be so tied down by making the choice between one set of features I want and another, like having a lot of stock IO but not a lot of expansion capacity, or having the choice of either SATA 3.0 support OR USB 3.0 support between 2 boards. Just give me a lot of expansion capacity so I can build the computer I need. You know, I mean, we've already detached graphical output from most non-OEM mobos. Of the two human interface devices a computer needs, the market has already decided that the output device is suitable to be left to expansion controllers alone. I'm just a little tired of dead USB buses trashing an entire motherboard.
  4. I feel like we're finally getting what we were promised as children by the NAR ads
  5. The problem with hardware standards is, they have to be future-proof to justify their development cost, but hardware is impossible to future proof because Moore's Law keeps making electronics more powerful, and suddenly your hard drives are using form factors developed decades ago when a few megabytes of storage was incredibly large, only now you're choked because you can get mechanical drives with several terabytes for less than $100, or solid-state memory devices that have theoretical data access speeds exceeding any reasonable standard. Not to mention it takes a few years from conception to implementation, in which time the standard comes out behind the technological curve to begin with. It's a wonder any of them work, and that PCIE works as well as it does. But seriously, we need more PCIE lanes. Like a lot more. I'm finding myself choked for peripheral support here. The problem with PCIE is it's a really large form factor that doesn't work if you try to cable it, so you're super limited by packaging and cooling constraints. You have to put big multi-pin cards right on the motherboard, eating up all the precious little space at the back of the case and leaving a little bit in the front, which isn't a big problem if you have one or two PCIE peripherals, but if you've got 2 graphics cards, a couple SSDs, a sound card, and a USB hub, you just found yourself on an EATX or WTX motherboard and a full tower case you don't have room for. (There's a rough lane-budget sketch for that kind of build after this list.) And no more room to expand your peripheral selection. (In any case, I think there's only one 6-PCIE mobo on the market right now, and if you double up on graphics cards on that, you can only use 5 slots anyway because of packaging) And sure that's an extreme case, but there are workstations with greater needs than even that.
  6. SATA 3.2 runs to a maximum 16Gb/s bandwidth. The only SCSI standard that exceeds that is SAS 4.0, which is just SATA with the voltage turned up and different software. Even Fibre Channel 16 only runs 13.6Gb/s maximums. Thunderbolt 2 does reach 20Gb/s throughput, but let's not give credit to Apple for that one. Intel made it when they decided it would be cheaper to run PCIe and DisplayPort over copper than to finish a new optical standard. (There's a line-rate versus usable-payload comparison after this list.) The only real disadvantage I've run into in SATA is the inability to chain drives through the controller standard. The physical standards for SATA work well enough, given SAS 4.0's performance on the same, and notably its ability to chain SATA drives to a SATA-based controller with SCSI logic. (Though, granted, the controllers all seem to interface with PCIe; mobo-bound SATA buses are slow) The only reason the connectors are junk is because you buy junk connectors. (Or they come with your junk mobo) I've seen all kinds of SATA connectors, and some are trash and some are quality pieces. It's not the connector standard's fault, blame the manufacturer. But the connectors do have a good bit of thought given to them. There are 3 pin lengths between the male and female ends, which natively allows for hot-swapping drives without damage or discontinuity, something that has to be shoved off onto the controller standard in other attachment standards. Actually, I'm starting to think we should just cover mobos in PCI-E slots and run all peripherals off the PCI buses rather than trying to build a hard disk bus directly into a motherboard. I mean, if you want performance, that does seem to be the way to go, either through direct PCI hard drives, or PCI-based drive controllers running your SATA/SAS/Favorite Standard drives. Who needs thermal control?
  7. Mazda have said time and again recently that they were abandoning rotary engine development due to fuel economy problems, but the rumors that have been floating around say they've continued development of the 16X architecture since 2011 when it was last publicly shown. Based on the teaser, there's no way this car is based on the ND platform. It's too big. This is Mazda we're talking about, mind. When they say "sports car" they mean a car with rear wheel drive, a front mounted engine, and a focus on the driving experience. And they're not particularly known for vaporware concept cars. The Shinari concept lent most of its design aspects to the Mazda6. The practical ones, anyway. A concept is a concept after all. Then again there was the Hazumi concept, which is basically a Mazda2, full stop. But Mazda has never made a non-rotary sports car that wasn't a Miata. By the time the Cosmo got its first set of pistons, it wasn't a sports car anymore. (And even then, they had available rotaries) This might happen. I mean, yeah they have the deal with Toyota, but I can't imagine Mazda building a badge platform for Toyota. Subaru got shafted when they did it, and Mazda isn't keen to make other carmakers' mistakes.
  8. Disregard Cadillac, All Glory to Mazda: Mazda to Unveil Concept Car at Tokyo Motor Show. There's no way this could be, but could it? Could it really? No. No, Mazda said they wouldn't. But would they? The month leading up to Tokyo Motor Show is always a painful wait.
  9. You know, if Funimation gets in on this Iron Blooded Orphans thing, I'll have to watch it, just to push Funimation toward giving us some Funi-quality NA releases in the future. Say what you will about them, they don't half-ass their NA releases.
  10. There's a surprising amount of thought that's gone into SATA. The only thing I really mind about it at this point is the maximum 6Gb/s throughput. Everything else is intelligent enough to kind of just work, but the data bandwidth is sorely lacking compared to other standards that have kept improving in recent years, particularly PCI-E. (Again, see the throughput comparison after this list.)
  11. At work, we use a lot of SATA cables. Kinking the cables for management isn't a problem for them, and you can figure out a way to kink the cable to force it into the connector on your drive, probably. None of the SATA cables we use have clips, they're just held in by the connector's fit and the cables keeping tension on them. Most of them are right-angle connector style with the cables jammed into the bottom of the chassis for extra stability. On the larger servers with 12 or more drives, we do use Mini-SAS connectors, which do have clips, but the connector style is totally different anyway.
  12. I've played the AWD game, more than I've done pure RWD, but even if it is technically faster, it's not as fun. Decided I wouldn't take the cop magnet out tonight though. Last Saturday of the month, cops trying to meet quotas, and my wangan loop has massive construction. Also the tires are even more dry-rotted now than before, and the ignition timing keeps retarding because the distributor is missing its second hold-down screw. Also no insurance, and the car is 3 years out of inspection and a year and a half out of reg.
  13. I feel you there. Texas is a lot nicer than the pundits would have you believe, but western Washington is just gorgeous, and in many ways better-sorted than Texas. I bet the temperatures are reasonable there right now too. It's 90 right now in Arlington.
  14. Any time I see a FWD sideways, I roll my eyes. Minor oversteer is one thing, but all the way sideways, you're just mad because you didn't get RWD. And was anyone really surprised about that outcome?
  15. Now that you mention it, it did look like Zeon was going to come out on top by the end of the year, but in the middle of September the first event that would lead to their eventual defeat occurred.
  16. You don't have to live there, but you do have to be able to commute to work every day, and because Silicon Valley, it doesn't get enough cheaper in the surrounding area to make $70,000 worth that much as an aerospace engineer. It wouldn't be so bad in Fort Worth or Cleveland or even Redmond, Washington. But $70k within reasonable driving distance of Palo Alto is pretty weak.
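
On the IPMI triage mentioned in post 1: before rolling a crash cart at a server that shows no local display but POSTs fine over IPMI, it's usually worth pulling the chassis status, event log, and sensor readings from the BMC first. Here's a minimal sketch in Python, assuming the stock ipmitool CLI is installed and the BMC is reachable over the LAN interface; the host address and credentials are placeholders, not anything from the post.

```python
# Minimal sketch: remote triage of a "no local display, but POSTs over IPMI" report.
# Assumes ipmitool is installed and the BMC answers on the LAN interface.
import subprocess

def ipmi(host, user, password, *args):
    """Run one ipmitool command against a remote BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def triage(host, user="ADMIN", password="ADMIN"):
    # Power/POST state first: if the chassis reports power off, stop here.
    print(ipmi(host, user, password, "chassis", "status"))
    # System event log: a dying onboard device usually leaves entries here,
    # which is what separates "bad VGA cable" from "chassis swap".
    print(ipmi(host, user, password, "sel", "list"))
    # Sensor dump: confirms the BMC and board are alive and reporting sanely.
    print(ipmi(host, user, password, "sensor"))

if __name__ == "__main__":
    triage("10.0.0.42")  # hypothetical BMC address; credentials are placeholders
```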
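For the video-crunching server in post 2, the usable space works out roughly as follows. This is only a back-of-the-envelope sketch: it assumes plain two-way mirrors striped together (so RAID10 usable capacity is half the raw capacity) and ignores filesystem and controller overhead; the drive counts and sizes are the ones quoted in the post.

```python
# Rough RAID10 capacity math for the arrays described in post 2.
# RAID10 = striped two-way mirrors, so usable capacity is (drives / 2) * drive_size,
# ignoring filesystem/controller overhead.

def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    assert drive_count % 2 == 0, "RAID10 needs an even number of drives"
    return (drive_count // 2) * drive_size_tb

ssd_array = raid10_usable_tb(4, 1.2)   # 4x 1.2TB SSD -> 2.4 TB usable
hdd_array = raid10_usable_tb(4, 2.0)   # 4x 2TB HDD   -> 4.0 TB usable

print(f"SSD RAID10: {ssd_array:.1f} TB usable, HDD RAID10: {hdd_array:.1f} TB usable")
```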
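To put a number on the lane crunch in post 5, here's a rough lane budget for the workstation described there. The per-device lane widths are typical figures I've assumed (x16 per graphics card, x4 per PCIe SSD, x1 for the sound card and USB hub), not anything from the post, and real boards bifurcate slots or hang devices off the chipset rather than give everything its full width.

```python
# Back-of-the-envelope PCIe lane budget for the build described in post 5.
# Per-device widths are assumed typical values; real boards share chipset lanes,
# so the practical picture is usually tighter than this.

devices = {
    "graphics card #1": 16,
    "graphics card #2": 16,
    "PCIe SSD #1": 4,
    "PCIe SSD #2": 4,
    "sound card": 1,
    "USB expansion hub": 1,
}

total = sum(devices.values())
print(f"Lanes wanted: {total}")  # 42
# A mainstream desktop CPU of the era exposes 16 CPU lanes (28-40 on HEDT parts),
# so the GPUs drop to x8/x8, the SSDs move to the chipset, or you go EATX/WTX.
```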
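And on the bandwidth numbers in posts 6 and 10: the gap between a quoted line rate and what you actually move comes down to line encoding. A quick comparison sketch follows; the encoding efficiencies are the standard 8b/10b, 64b/66b, and 128b/130b figures, Thunderbolt 2 is listed at its marketed aggregate rate, and I've used shipping SAS-3 at 12Gb/s rather than the SAS 4.0 the post mentions.

```python
# Nominal line rate vs usable payload for the interfaces compared in posts 6 and 10.
# Efficiencies: 8b/10b = 0.80, 64b/66b = 64/66, 128b/130b = 128/130.
# Thunderbolt 2 is shown at its marketed aggregate rate (an approximation).

interfaces = [
    # (name, line rate in Gb/s, encoding efficiency)
    ("SATA 3.0 (per port)",   6.000, 8 / 10),
    ("SAS-3 (per lane)",     12.000, 8 / 10),
    ("16GFC Fibre Channel",  14.025, 64 / 66),
    ("PCIe 3.0 (per lane)",   8.000, 128 / 130),
    ("Thunderbolt 2",        20.000, 1.0),
]

for name, gbps, eff in interfaces:
    payload_gbps = gbps * eff
    print(f"{name:<22} {gbps:6.3f} Gb/s line -> "
          f"{payload_gbps:5.2f} Gb/s payload (~{payload_gbps * 1000 / 8:4.0f} MB/s)")
```

Run it and SATA 3.0 lands around 600 MB/s of payload, 16GFC at the 13.6Gb/s figure quoted in post 6, and a single PCIe 3.0 lane just under 1 GB/s, which is why a x4 controller or drive outruns any mobo-bound SATA port.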