jamahir
ELITE MEMBER
lol what does that mean?
Dude, I am just trying to find the corresponding CPU level we are looking at. Do you think my view is wrong?
You were comparing processors based on their clock speed, which is not how processor performance is measured.
Think of this... Elbrus at 750 MHz performs like a GHz-level Intel processor (from what I have been reading), especially because Elbrus has a native facility to emulate Intel x86 instructions without trouble.
But the natural design of a processor is one that works at 0 Hz... that is, the instructions execute without a clock... google for clockless processors... I have been participating in the design of one for the last five years.
So how would one compare a clockless processor to an Intel chip running at 2.5 GHz... what would your factors be for general-purpose use... additionally, performance also depends on the control program (the operating system).
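One clock-neutral way to frame that comparison is to measure completed operations per second and per joule on the same workload, rather than comparing clock frequencies. A minimal sketch; all numbers below are invented purely for illustration and describe no real chip:

```python
# Clock-neutral comparison: throughput and energy efficiency on one
# shared workload. The figures are hypothetical placeholders.

def ops_per_second(ops, seconds):
    return ops / seconds

def ops_per_joule(ops, joules):
    return ops / joules

# Hypothetical benchmark results for two designs on the same workload:
clocked   = {"ops": 2.0e9, "seconds": 1.0, "joules": 30.0}  # e.g. a 2.5 GHz part
clockless = {"ops": 1.6e9, "seconds": 1.0, "joules": 12.0}

for name, r in (("clocked", clocked), ("clockless", clockless)):
    print(name,
          ops_per_second(r["ops"], r["seconds"]),  # raw throughput
          ops_per_joule(r["ops"], r["joules"]))    # energy efficiency
```

On metrics like these, a lower-clocked or clockless design can still come out ahead on efficiency even when it loses on raw throughput.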
Have you done CPU design?
funny that you quoted me for the very thing i was replying to jhungary for.
i have been participating since five years in design of a clock-less processor which has a much simplified and optimized architecture... less than 20 instructions, long-word instructions, new way of i/o and other things.
Have you done CPU design? One class in college had me doing a partial design of the PDP-8 CPU. Trying to get everything working reliably without a clock would be a whole new class of complexity, for only a tiny (potential) performance improvement. The clock is how the flip-flops go from remembering state to transitioning to a new state. Removing the clock would require reworking so much basic design as to be impossible.
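The flip-flop point above can be sketched in a few lines: a D flip-flop only captures its input on a rising clock edge, and between edges it just remembers. This toy model (my own illustration, not from any real design) shows why the storage elements themselves assume a clock:

```python
# Toy edge-triggered D flip-flop: state changes only on a 0 -> 1
# clock transition; at all other times the stored value is held.

class DFlipFlop:
    def __init__(self):
        self.q = 0          # stored state
        self._prev_clk = 0  # last clock level seen

    def step(self, d, clk):
        # Capture the input only on a rising edge of the clock.
        if clk == 1 and self._prev_clk == 0:
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
ff.step(d=1, clk=0)  # clock low: q holds 0
ff.step(d=1, clk=1)  # rising edge: q captures 1
ff.step(d=0, clk=1)  # clock held high: q still 1
```

Removing the clock means replacing this edge-triggered capture with some other completion-detection mechanism, which is exactly the rework being discussed.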
You are saying that for two reasons...
1. not many have attempted such a thing, so there are not many ready references.
2. the extra number of signals needed for asynchronous actions.
Of course it requires a big rework... what's the problem in rethinking processor design? My project has done it.
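Those "extra signals for asynchronous actions" typically take the form of request/acknowledge handshakes between stages in place of a shared clock. A hedged sketch of the classic four-phase handshake (the event names and structure here are my own illustration, not taken from any particular design):

```python
# Four-phase request/acknowledge handshake for one data transfer.
# Real asynchronous circuits implement this with gates, not Python;
# this just traces the ordered signal events.

def four_phase_transfer(data):
    """Return the ordered (signal, payload) events for one transfer."""
    events = []
    events.append(("req=1", data))  # sender asserts request; data is valid
    events.append(("ack=1", data))  # receiver latches data, acknowledges
    events.append(("req=0", None))  # sender withdraws the request
    events.append(("ack=0", None))  # receiver resets; channel idle again
    return events

for signal, payload in four_phase_transfer(42):
    print(signal, payload)
```

Each transfer costs four signal transitions on two wires, which is the wiring overhead a clocked design avoids by distributing one global timing reference instead.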
RISC designs are a better approach, and hybrid RISC/CISC designs have already beaten those.
RISC is indeed better... but for a hybrid example, let us compare the Pentium 4 (hybrid) to the latest ARM (RISC)... even setting aside the on-board GPU, ARM gives good performance relative to battery power, yes? We cannot miniaturize a Pentium 4 and jam it into a cell phone, because the P4 would drain the battery fast.
Someone may have done some theoretical research on performance gains from eliminating the clock, but I really doubt there is much to be gained there.
Not only theoretical; there has been an ARM clockless release, though it didn't really take off, I would say for capitalist reasons... the ARM996HS... (ARM offers first clockless processor core | EE Times)... and there was older British university work called Amulet... (University spinouts revive clockless processors | EE Times).
Theoretically, you could arrange the timing so that each layer of logic was timed per its own glitchiness and distance from the previous layer, but as a practical matter, things are going to have to line up for instruction execution anyhow. So you'd wind up waiting on the slower links anyhow. May as well just clock it and tune the clock for the worst case. The difference just isn't going to be that big, and given how many instructions will be executed per second, any measurable error rate is unacceptable.
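The trade-off described above can be put in numbers. A clocked pipeline's period is set by the slowest stage's worst-case delay, while a clockless pipeline's steady-state rate is still bounded by its slowest stage, just at that stage's actual delay for the data in hand. The delays below are invented purely for illustration:

```python
# Illustrative stage delays (not measurements from any real design).
stage_worst_case = [2.0, 5.0, 3.0]  # ns, guaranteed upper bounds
stage_typical    = [1.5, 3.0, 2.0]  # ns, delays on typical inputs

clock_period     = max(stage_worst_case)  # every stage waits for this
async_bottleneck = max(stage_typical)     # slowest stage still gates flow

print(f"clocked throughput:        {1 / clock_period:.3f} ops/ns")
print(f"clockless (ideal) limit:   {1 / async_bottleneck:.3f} ops/ns")
```

So even in the ideal clockless case the gain is the gap between worst-case and typical delay of the slowest stage, which supports the point that the win may not justify the added design complexity.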
Right now I can't quite relate my project's simplifications/optimizations to the problems you describe... allow me to look at your description at a slow pace and then reply to you.