I suspect people trying to find alternate CPU architectures that don't suffer from #Spectre-like bugs have misunderstood how fundamental the problem is.
Your CPU will not go fast without caches. Your CPU will not go fast without speculative execution. Solving the problem will require more silicon, not less.
I don't think the market will accept the performance hit implied by simpler architectures. OS, compiler and VM (including the browser) workarounds are the way this will get mitigated.
@HerraBRE you're right, BUT big caveat here: that this is necessary is software's fault. Remember, programmers add abstractions as fast as (often faster than) Moore's law. Our computers are ridiculously powerful and would still be even without out-of-order or speculative execution; we've just grown accustomed to hugely overpowered machines and designed our software with that in mind.
@tekk @sir @clacke Itanium had lots of problems. The lack of backward compatibility just meant people weren't willing to adopt it merely to get 64-bit. Intel also made the mistake of trying to use it to push people onto their own compiler, IIRC, just like they've done with TBB and their other proprietary crap. None of these problems would happen with a more open approach.
(there's a history of companies making custom chips for Java; the latest I can think of is Azul's Vega, although that's tuned more for massive multi-core concurrency)
This is essentially what Intel bet on with the Itanium architecture. It failed spectacularly in this respect for two reasons. First, compilers are already pretty good; you can improve them somewhat, but the biggest gains are in the past. Second, a lot of branching depends on runtime data: the compiler can at best know which path is likelier, but the CPU either knows or can take both paths when it doesn't.