Vulkan API (it's out!) (271 pages)
|innuendo||Regular||www||11 May 2018||12:48||#4050|
> So why is it even needed?
well, let Andrey be the one to answer that :)
|g-cont||Regular||www||11 May 2018||13:40||#4051|
first they couldn't care less about GL, and now they couldn't care less about Vulkan in exactly the same way.
Edited: 11 May 2018 13:40
|v1c||Member||www||11 May 2018||15:12||#4052|
The first lesson is: Nearly every game ships broken. We're talking major AAA titles from vendors who are everyday names in the industry. In some cases, we're talking about blatant violations of API rules - one D3D9 game never even called BeginFrame/EndFrame. Some are mistakes or oversights - one shipped bad shaders that heavily impacted performance on NV drivers. These things were day to day occurrences that went into a bug tracker. Then somebody would go in, find out what the game screwed up, and patch the driver to deal with it. There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games - up to and including total replacement of the shipping shaders with custom versions by the driver team. Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA? There you go.
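(As context for the BeginFrame/EndFrame remark: the actual D3D9 entry points are IDirect3DDevice9::BeginScene and EndScene, and the rule is that every draw call of a frame has to sit between them. Below is a rough sketch of the expected frame structure; device and swap chain creation are omitted, and render_frame is just a hypothetical name for illustration.)

#include <d3d9.h>

// Sketch only: 'device' is assumed to be a valid IDirect3DDevice9* created elsewhere.
void render_frame(IDirect3DDevice9 *device)
{
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    if (SUCCEEDED(device->BeginScene()))
    {
        // ...every DrawPrimitive / DrawIndexedPrimitive call of the frame belongs in here...
        device->EndScene();
    }
    device->Present(NULL, NULL, NULL, NULL);   // hand the finished frame to the swap chain
}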
The second lesson: The driver is gigantic. Think 1-2 million lines of code dealing with the hardware abstraction layers, plus another million per API supported. The backing function for Clear in D3D 9 was close to a thousand lines of just logic dealing with how exactly to respond to the command. It'd then call out to the correct function to actually modify the buffer in question. The level of complexity internally is enormous and winding, and even inside the driver code it can be tricky to work out how exactly you get to the fast-path behaviors. Additionally the APIs don't do a great job of matching the hardware, which means that even in the best cases the driver is covering up for a LOT of things you don't know about. There are many, many shadow operations and shadow copies of things down there.
The third lesson: It's unthreadable. The IHVs sat down starting from maybe circa 2005, and built tons of multithreading into the driver internally. They had some of the best kernel/driver engineers in the world to do it, and literally thousands of full blown real world test cases. They squeezed that system dry, and within the existing drivers and APIs it is impossible to get more than trivial gains out of any application side multithreading. If Futuremark can only get 5% in a trivial test case, the rest of us have no chance.
The fourth lesson: Multi GPU (SLI/CrossfireX) is fucking complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. (And I don't even know what the hardware side looks like.) If you've ever tried to independently build an app that uses multi GPU - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole. There is ONE fast path, and it's the narrowest path of all. Take lessons 1 and 2, and magnify them enormously.
|v1c||Member||www||11 May 2018||15:13||#4053|
* Why are games broken? Because the APIs are complex, and validation varies from decent (D3D 11) to poor (D3D 9) to catastrophic (OpenGL). There are lots of ways to hit slow paths without knowing anything has gone awry, and often the driver writers already know what mistakes you're going to make and are dynamically patching in workarounds for the common cases.
* Maintaining the drivers with the current wide surface area is tricky. Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc) simply cannot keep up with the necessary investment. More importantly, explaining to devs the correct way to write their render pipelines has become borderline impossible. There are too many failure cases. It's been understood for quite a few years now that you cannot max out the performance of any given GPU without having someone from NVIDIA or AMD physically grab your game source code, load it on a dev driver, and do a hands-on analysis. These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.
* Threading is just a catastrophe and is being rethought from the ground up. This requires a lot of the abstractions to be stripped away or retooled, because the old ones required too much driver intervention to be properly threadable in the first place.
* Multi-GPU is becoming explicit. For the last ten years, it has been AMD and NV's goal to make multi-GPU setups completely transparent to everybody, and it's become clear that for some subset of developers, this is just making our jobs harder. The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually.
Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow. The IHVs certainly weren't willing to disclose these things publicly either, as they were considered critical to competitive advantage. (Sure they are guys. Sure they are.) So the game is guessing what the driver is doing, the driver is guessing what the game is doing, and the whole mess could be avoided if the drivers just wouldn't work so hard trying to protect us.
So why didn't we do this years ago? Well, there are a lot of politics involved (cough Longs Peak) and some hardware aspects but ultimately what it comes down to is the new models are hard to code for. Microsoft and ARB never wanted to subject us to manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables, etc. Segfaulting a GPU isn't a fun experience. You can't trap that in a (user space) debugger. So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.
Three out of those four just made their engines public and free with minimal backend financial obligation.
Now there's nothing wrong with any of that, obviously, and I don't think it's even the big motivating raison d'etre of the new APIs. But there's a very real message that if these APIs are too challenging to work with directly, well the guys who designed the API also happen to run very full featured engines requiring no financial commitments*. So I think that's served to considerably smooth the politics involved in rolling these difficult to work with APIs out to the market, encouraging organizations that would have been otherwise reticent to do so.
[Edit/update] I'm definitely not suggesting that the APIs have been made artificially difficult, by any means - the engineering work is solid in its own right. It's also become clear, since this post was originally written, that there's a commitment to continuing DX11 and OpenGL support for the near future. That also helped the decision to push these new systems out, I believe.
The last piece to the puzzle is that we ran out of new user-facing hardware features many years ago. Ignoring raw speed, what exactly is the user-visible or dev-visible difference between a GTX 480 and a GTX 980? A few limitations have been lifted (notably in compute) but essentially they're the same thing. MS, for all practical purposes, concluded that DX was a mature, stable technology that required only minor work and mostly disbanded the teams involved. Many of the revisions to GL have been little more than API repairs. (A GTX 480 runs full featured OpenGL 4.5, by the way.) So the reason we're seeing new APIs at all stems fundamentally from Andersson hassling the IHVs until AMD woke up, smelled competitive advantage, and started paying attention. That essentially took a three year lag time from when we got hardware to the point that compute could be directly integrated into the core of a render pipeline, which is considered normal today but was bluntly revolutionary at production scale in 2012. It's a lot of small things adding up to a sea change, with key people pushing on the right people for the right things.
Phew. I'm no longer sure what the point of that rant was, but hopefully it's somehow productive that I wrote it. Ultimately the new APIs are the right step, and they're retroactively useful to old hardware which is great. They will be harder to code. How much harder? Well, that remains to be seen. Personally, my take is that MS and ARB always had the wrong idea. Their idea was to produce a nice, pretty looking front end and deal with all the awful stuff quietly in the background. Yeah it's easy to code against, but it was always a bitch and a half to debug or tune. Nobody ever took that side of the equation into account. What has finally been made clear is that it's okay to have difficult to code APIs, if the end result just works. And that's been my experience so far in retooling: it's a pain in the ass, requires widespread revisions to engine code, forces you to revisit a lot of assumptions, and generally requires a lot of infrastructure before anything works. But once it's up and running, there's no surprises. It works smoothly, you're always on the fast path, anything that IS slow is in your OWN code which can be analyzed by common tools. It's worth it.
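To put the earlier line about "manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables" into concrete terms, here is a rough Vulkan sketch of baking one immutable graphics pipeline up front. It is an illustration only: buildPipeline is a hypothetical helper, and the device, shader modules, pipeline layout and render pass (single color attachment, no depth) are assumed to already exist.

#include <vulkan/vulkan.h>

// Hypothetical helper: bakes two shader stages plus all fixed-function state into
// one immutable pipeline object. Viewport and scissor are left dynamic.
VkPipeline buildPipeline(VkDevice device, VkShaderModule vs, VkShaderModule fs,
                         VkPipelineLayout layout, VkRenderPass renderPass)
{
    VkPipelineShaderStageCreateInfo stages[2] = {};
    stages[0].sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stages[0].stage  = VK_SHADER_STAGE_VERTEX_BIT;
    stages[0].module = vs;
    stages[0].pName  = "main";
    stages[1]        = stages[0];
    stages[1].stage  = VK_SHADER_STAGE_FRAGMENT_BIT;
    stages[1].module = fs;

    // Fixed-function state: all of it has to be decided here, up front.
    VkPipelineVertexInputStateCreateInfo   vi = { VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO };
    VkPipelineInputAssemblyStateCreateInfo ia = { VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO };
    ia.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
    VkPipelineViewportStateCreateInfo      vp = { VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO };
    vp.viewportCount = 1;
    vp.scissorCount  = 1;                         // actual values come from dynamic state
    VkPipelineRasterizationStateCreateInfo rs = { VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO };
    rs.polygonMode = VK_POLYGON_MODE_FILL;
    rs.cullMode    = VK_CULL_MODE_BACK_BIT;
    rs.lineWidth   = 1.0f;
    VkPipelineMultisampleStateCreateInfo   ms = { VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO };
    ms.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
    VkPipelineColorBlendAttachmentState    att = {};
    att.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                         VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
    VkPipelineColorBlendStateCreateInfo    cb = { VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO };
    cb.attachmentCount = 1;
    cb.pAttachments    = &att;
    VkDynamicState dyn[] = { VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR };
    VkPipelineDynamicStateCreateInfo       dy = { VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO };
    dy.dynamicStateCount = 2;
    dy.pDynamicStates    = dyn;

    VkGraphicsPipelineCreateInfo info = { VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO };
    info.stageCount          = 2;
    info.pStages             = stages;
    info.pVertexInputState   = &vi;
    info.pInputAssemblyState = &ia;
    info.pViewportState      = &vp;
    info.pRasterizationState = &rs;
    info.pMultisampleState   = &ms;
    info.pColorBlendState    = &cb;
    info.pDynamicState       = &dy;
    info.layout              = layout;
    info.renderPass          = renderPass;

    // The driver compiles everything once; nothing about this pipeline changes afterwards.
    VkPipeline pipeline = VK_NULL_HANDLE;
    vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &info, NULL, &pipeline);
    return pipeline;
}

The point is that everything the old drivers used to guess at draw time is fixed at creation time here, so there is nothing left for the driver to patch behind your back.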
The opinion of someone who works on a Driver Team about why Mantle/Vulkan came to be
Edited: 11 May 2018 15:15
|innuendo||Regular||www||11 May 2018||17:01||#4054|
|g-cont||Regular||www||12 May 2018||0:58||#4055|
And really, ever since the early 90s the driver should have been yelling about every such situation and handing out optimization advice, instead of "dynamically working around" developers' mistakes. When did GL_ARB_debug_output appear? In 2010? After a whole pile of games had already been written.
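For reference, hooking it up is only a few lines once you have a debug-capable context. A minimal sketch, assuming a GL 4.3 / KHR_debug context and a loader such as GLEW (already initialized); the GL calls are the standard entry points, while enable_gl_debug_output is just a hypothetical helper name:

#include <stdio.h>
#include <GL/glew.h>

// Called by the driver for every error, deprecation warning, or performance hint.
static void GLAPIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar *message, const void *user)
{
    (void)source; (void)id; (void)length; (void)user;
    fprintf(stderr, "[GL debug] type=0x%x severity=0x%x: %s\n", type, severity, message);
}

// Call once after creating a context with the debug flag set (and after glewInit()).
void enable_gl_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report at the offending call, not asynchronously
    glDebugMessageCallback(on_gl_debug, NULL);
}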
>>Ignoring raw speed, what exactly is the user-visible or dev-visible difference between a GTX 480 and a GTX 980?
Well, fair point, actually. It's unlikely anything new will show up anymore.
Edited: 12 May 2018 1:05
|innuendo||Regular||www||12 May 2018||8:05||#4056|
> The biggest mistake of OpenGL
was that it left the workstations, and now every schoolkid gets to bash it :)
> And really, ever since the early 90s the driver should have been yelling about every such
> situation and handing out optimization advice, instead of "dynamically working around"
> developers' mistakes. When did GL_ARB_debug_output appear? In 2010? After a whole pile of
> games had already been written.
I was porting one project to Mac and the need never once came up... if the bug is in the driver, no debug output will help
Edited: 12 May 2018 8:28
|Delfigamer||Regular||www||12 May 2018||9:22||#4057|
> Well, fair point, actually. It's unlikely anything new will show up anymore.
What about the hardware ray tracer? It's a whole unplowed field, with thi-i-is much potential to screw things up so badly that it'll have to be redone several more times.
Edited: 12 May 2018 9:23
|elviras9t||Member||www||12 May 2018||11:32||#4058|
> it'll have to be redone several more times
It won't have to be redone anymore... they've got it all figured out...
Edited: 12 May 2018 11:33
|Delfigamer||Regular||www||12 May 2018||14:27||#4059|
> It won't have to be redone anymore... they've got it all figured out...
Oh really, so a dozen finished games in different genres on different engines have already been written? And in every single one of them everything just works and nothing lags?
Edited: 12 May 2018 14:29
|innuendo||Regular||www||12 May 2018||14:58||#4060|
|g-cont||Regular||www||13 May 2018||0:31||#4061|
> was that it left the workstations
errors don't occur on workstations?
> if the bug is in the driver, no debug output will help
At the very least it will help you understand that the bug is in the driver.
> What about the hardware ray tracer?
Honestly, I've grown disillusioned with these ray tracers. For every rendered frame you also have to figure out how to suppress the noise in the image. In other words, we're dealing with artifacts of a completely different kind. Shadows used to z-fight; now the whole screen is going to be noisy.
|innuendo||Regular||www||13 May 2018||0:47||#4062|
> > if the bug is in the driver, no debug output will help
> At the very least it will help you understand that the bug is in the driver.
as if that's not obvious without debug output? :)
|elviras9t||Member||www||13 May 2018||10:36||#4063|
All I know is that Visual Studio is a buggy IDE with a buggy compiler and debugger!
So I've been catching Visual Studio bugs for years now... Especially in the Preview builds.
Edited: 13 May 2018 10:38
|g-cont||Regular||www||13 May 2018||11:02||#4064|
> as if that's not obvious without debug output? :)
Depends on what you count as a driver bug.