Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
What is the advantage of 4G mobile phones over 3G ones? KägeTorä - (影虎) (もしもし!) 00:52, 25 January 2016 (UTC)
When you type something like
./mysql -u root -p
What do the -u and -p signify? 68.142.63.182 (talk) 02:15, 25 January 2016 (UTC)
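For reference, in the MySQL command-line client those are short option flags: -u gives the username to connect as, and -p makes the client prompt for that user's password. The long forms below do the same thing; "root" is just the account name from the question, and the inline-password variant is shown only to illustrate the (slightly odd) syntax, not as a recommendation:
./mysql --user=root --password        # equivalent to ./mysql -u root -p; prompts for the password
./mysql -u root -pMyPassword          # password given inline; note there is no space after -p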
Thank you. 68.142.63.182 (talk) 00:48, 26 January 2016 (UTC)
Some CPUs have built-in GPUs, which eliminates the need for a graphics card for some applications. If such a processor is used in a motherboard with a graphics card inserted, though, does that mean the GPU built into the processor goes to waste, or does it still contribute? If it doesn't contribute, is it a sensible strategy to look for a processor without a GPU to save money? --78.148.108.55 (talk) 13:22, 25 January 2016 (UTC)
If you have an AMD GPU and the right AMD APU, these can work in a Crossfire config. But because AMD hasn't updated their APU GPUs for a while, you're limited to fairly old cards, and the benefit is small anyway if you have a mid-range card. Cross-vendor GPU/CPU combinations (i.e. NVIDIA with Intel, AMD/RTG with Intel, NVIDIA with AMD) haven't really been able to work together for a while. Okay, LucidLogix Virtu MVP tried something like that, but IIRC it was worse (and worse supported) than AMD's Crossfire setup, so it never really took off and seems to have been largely abandoned.
Theoretically, and especially with GPGPU, it's possible for both to be used. In practice this rarely happens for home users. (I guess some miners and others who go out of their way may use both.) It's possible that DX12 will change things, but it's hard to say whether this is really going to happen. [3]
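As a quick illustration of the GPGPU point: on most systems with both sets of drivers installed, the iGPU and the discrete card are both already visible to compute APIs such as OpenCL, even when only one of them drives the display. On Linux the stock clinfo utility will list them (the command is the only assumption here; what it prints depends entirely on your drivers):
clinfo | grep -i "device name"        # one line per OpenCL device, e.g. an Intel HD Graphics iGPU plus a GeForce/Radeon card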
As for your suggested strategy, the answer is mostly no. For starters, since we're so far into the APU/SoC era, very few CPUs lack GPUs, particularly those targeted at home users. More significantly, particularly once you get past the low end, the connection between CPU price and production cost is very tenuous. It's really all about coming up with different products for different market segments, disabling features as needed (often not because they are broken or because this saves costs, but because the vendor knows people will pay more for them). And considering the poor situation AMD is in, it's really mostly Intel we're talking about here, and Intel has no real interest in encouraging the standalone GPU market.
The best you can do is, if you're planning to get a standalone GPU, not worry about the iGPU. But even this is of limited utility, since the best CPUs tend to have the best iGPUs. (There are exceptions, particularly for Intel's top-end iGPUs.)
Perhaps Apple's implementation was better at the time, but it definitely wasn't the first. (Perhaps it still is better, particularly in terms of compatibility and/or driver updates.) In fact pretty much everyone had it before that Apple implementation, as your article sort of says. (To be fair, my understanding is Apple also had it in some form before the article you linked to.) And as I hinted at above, other vendors mostly still have it now.
Anyway, since the OP appears to be interested in desktops or similar (as I said above, they mentioned graphics cards), it remains unexplained how Apple is making "intelligent use of all the hardware they've got" for the use case the OP appears interested in.
Nil Einne (talk) 17:37, 26 January 2016 (UTC)
With laptops, you haven't really been able to design them yourself for a long time, and pretty much all system designers have been using both GPUs in some fashion for a long time, since before 2010 and before GPUs were integrated onto the CPU. So beyond it not being what the OP seems interested in, it doesn't seem to help much. Laptops with Linux are to some extent the only real exception, since switchable dual-graphics support there has often been limited; and if you were installing Windows yourself, you did have to be careful with drivers. (Likewise, if you really were designing the system yourself, you did need to take a bit of care to ensure switchable dual graphics worked.)
Getting back to my earlier point, it's actually been possible to use both GPUs in some fashion for a long time, especially after GPGPU began to become a thing (which was before iGPUs existed). This has been supported at some level by the OS and by systems as designed. Even if you were assembling your own system, you didn't really need to do much a lot of the time. But while it's been supported, as mentioned in my first post, it hasn't AFAIK actually been used much. This is for a variety of reasons, including that the support wasn't that good and that software developers just didn't feel it was useful, particularly considering the compatibility problems that can result (which to some extent relates to the support issue). For the former you can, I guess, blame the system designer. For the latter, it doesn't make much sense to blame the system designer, unless you use the odd definition of "system designer" under which, when I buy a Mac Pro or iMac or Alienware desktop or HP desktop or whatever from my local store and take it home to play GTA5 and Fallout 4, I'm a system designer. (But maybe not if I bought a Dell laptop or MacBook Pro?)
Ultimately, whoever you want to blame it on and whatever you want to call them, the point is that as an end user you have limited choice. If your software doesn't use both GPUs, and there's no software which will fulfil the same purpose in every way but will use both GPUs and be better for it, then there's not much you can do, except code your own software, which makes little sense for most users. It gets even worse if you're talking about games. If I want to play GTA5, I'm not that likely to choose some other game just because it uses both GPUs, and coding your own GTA5, or even hacking it to use both GPUs, is most likely untenable even for an excellent coder.
And unless you actually have a need for the software which will use both GPUs, it doesn't make sense to run it just because the GPU is otherwise going unused. Given idle-power improvements, using the GPU on the CPU will generally mean more energy consumption and heat generated, which even in winter, and even if you use electrical heating, is IMO not necessarily useful. More significantly, if the CPU supports some sort of turbo mode, using the iGPU may mean the CPU cores you are using for the work you actually want done aren't clocking as high or for as long. And that's not even considering possible slowdowns due to the memory subsystem, CPU or other elements being used by this program you don't actually have any use for, but are just running because your GPU on the CPU would otherwise go unused.
What this all means, to get back to the OP's original point, is that you may have to accept that your GPU on the CPU is simply going to be unused. From your POV it might be better if the CPU didn't have a GPU, since it's going to waste and may increase power consumption a little even when unused. But since you aren't Intel and can't control their marketing decisions, the most intelligent thing to do is to choose the best CPU based on price-performance that fits your purposes and budget. That may sometimes mean a lower-end iGPU, but it often isn't going to mean no iGPU. To be fair, this isn't unique to Intel: all companies add features to their products for a variety of reasons, and some of those features are going to be unused by a fair few end users. And just as in those cases, it may seem a bit stupid to have a feature you aren't going to use, but if it isn't causing problems you should ignore it and concentrate on the features you do want and the price.
If you really want to look into it, LucidLogix Virtu MVP, which I mentioned before, is actually an interesting case study IMO. As I understand it, it was initially at least dependent on the system or motherboard. (I'm not sure if this changed; I didn't research it that closely except to check that it exists. Most results seem to be old, probably for the reason mentioned: it didn't have much success, so no one cares anymore.) But I think this was a licensing or compatibility thing; it was otherwise purely software and just required the two GPUs. So theoretically the system designers did provide something to use both GPUs (just as they did in the early pre-iGPU days when they supported switchable graphics with IGPs and discretes).
But as I mentioned, this seemed to largely fail. Whether it was because the software wasn't that good (compatibility problems etc.), or it didn't help enough to be worth it, or it did help a fair amount but people didn't realise, the technology mostly failed. So who you want to blame is complicated. FWIW it was still supported up to Windows 8/8.1 at least; I'm not sure about 10, but I guess you could still try it if you think people were wrong to reject it. One thing which I perhaps didn't make clear enough until now: perhaps the reason why these all failed is that the actual advantage you get from using the often very slow iGPU, when you have a much faster discrete GPU, is very limited. (Which is another factor not in favour of LucidLogix etc. These technologies add cost, so they are added to expensive systems, which are also the systems that tend to have very fast discretes.)
To be fair, with the iGPU improving, and with certain non-graphical tasks in games (like the physics, or sometimes even parts of the AI) that aren't particularly demanding compared to the graphics being rendered on the discrete GPU but where even the weak iGPU is a lot better than the CPU, it does seem like it would make sense to use the iGPU. And with the iGPU capable of sharing resources with the CPU, it can have particular advantages despite its low performance compared to the discrete. AMD definitely believed in HSA for a long time (I think they still do), and there's also interest in it on mobiles (albeit these don't have discretes). So perhaps with DX12 and similar, combined with other areas of progress, this really will finally take off. However, since we can't be sure this will happen, I don't believe it makes sense to choose a higher-end iGPU (or even an iGPU at all, if you find you do have the choice) just because you may one day use it.
P.S. It's possible even now that you're one of the few who does have a use for the iGPU, so I guess the OP should explore their software and check. If they find they are, then I guess you could say they're one of the lucky ones. It still doesn't change the fact that, for most people, it seems intelligent use is no use; to answer the OP's question, most of the time it does effectively go to waste, but the suggested strategy is still probably not sensible.
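One rough way for the OP to do that checking, at least on Linux with intel-gpu-tools and the NVIDIA drivers installed (those tool names are my assumption, not something from the thread), is to watch both GPUs while the software in question is running:
sudo intel_gpu_top        # live utilisation of the Intel iGPU
nvidia-smi -l 1           # utilisation of a discrete NVIDIA card, refreshed every second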
I'd like to encode MKV files from a folder containing the contents of a DVD (VOB files etc). The DVD itself is a thousand miles away and I won't have a chance to collect it for three months. The VOBs appear to play correctly, though I haven't watched them all the way through, and Handbrake detects them as valid sources, but instead of 3 episodes of roughly 1 hour each, it shows 2, the second of which is roughly 2 hours. It must be missing the point at which the second episode ends. Without access to the original DVD, is there some way I can get Handbrake to recognise the break between episodes? 94.12.81.251 (talk) 14:13, 25 January 2016 (UTC)
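Not a fix for the mis-detected title itself, but one possible workaround sketch: HandBrake's command-line version can scan the folder and then cut the combined two-hour title by chapter range or by time, so the two merged episodes can still be encoded separately. The flags are HandBrakeCLI's as far as I recall, and the title, chapter and time values below are placeholders to be replaced with whatever the scan actually reports:
HandBrakeCLI -i /path/to/VIDEO_TS --scan -t 0                              # list every title with its chapters and durations
HandBrakeCLI -i /path/to/VIDEO_TS -t 2 -c 1-4 -o episode2.mkv              # encode only chapters 1-4 of title 2
HandBrakeCLI -i /path/to/VIDEO_TS -t 2 --start-at duration:3600 --stop-at duration:3600 -o episode3.mkv   # or cut by time: skip the first hour, keep the next hour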
Why isn't everything, and not only the browser, sandboxed? At least the common targets of viruses and malware could be sandboxed.--Scicurious (talk) 15:53, 25 January 2016 (UTC)
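As a concrete sketch of what per-application sandboxing can look like outside the browser (this assumes a Linux box with Firejail installed; the programs and flags are just examples, not a complete policy):
firejail --net=none evince some.pdf       # open a PDF viewer with no network access
firejail --seccomp libreoffice            # run LibreOffice with Firejail's default profile plus a seccomp syscall filter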