Boot Times vs. Non-Volatile Main Memory (was: 433AU workstation for sale)

On Jan 18, 6:11 am, Paul Sture <p...@xxxxxxxx> wrote:
On Thu, 12 Jan 2012 05:24:25 +0100, Fritz Wuehler wrote:
   DEC hardware runs Windows.  Both x86 and Alpha.  Just don't try to
   get anything current.

As with most things, older is better when it comes to Windows. Win 95
may crash a lot, but it boots a lot faster than newer versions.

Get a Mac :-)

We recently timed a MacBook Air with an SSD, running the latest OS
version, at 12.5 seconds from cold start to the login prompt.

I've a feeling that a PDP-11/34 running RT-11 over 30 years ago could
have beaten that, though :-)

This predilection for fast boot times as a primary design goal for
many operating systems is interesting. I have also read that it is one
of the primary improvements in Windows 8.0.

Considering the general direction of memory technology toward faster
non-volatile main memory, using flash memory today and potentially
memristor-based memory in the future, I think it would make more sense
to concentrate on making operating systems so stable that they almost
never need rebooting. Boot time would then be rather unimportant. True
OS stability is not a simple goal to attain; it generally requires
that the OS structures be designed from the start with stability as a,
or the, primary goal. The merging of the roles of secondary and main
memory into a single non-volatile main memory, in which the complete
virtual memory can in principle live forever without a reset, is
already starting to be realized to some extent in smartphones today.
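To make the idea concrete, here is a minimal sketch of a data
structure that "lives" across restarts. It emulates non-volatile main
memory with an ordinary memory-mapped file; the file name and helper
functions are hypothetical, purely for illustration. On real
persistent-memory hardware the same mmap technique can map the
non-volatile region directly into the address space.

```python
import mmap
import os
import struct

PM_FILE = "counter.pm"   # hypothetical stand-in for a persistent-memory region

def open_pm(path, size=8):
    """Map a file as if it were persistent main memory."""
    is_new = not os.path.exists(path)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, size)
    mem = mmap.mmap(fd, size)
    if is_new:
        mem[:8] = struct.pack("<Q", 0)   # initialize once, on "first boot"
    return mem

def bump(mem):
    """Increment the persistent counter in place and return its new value."""
    (value,) = struct.unpack("<Q", mem[:8])
    value += 1
    mem[:8] = struct.pack("<Q", value)
    mem.flush()   # loosely analogous to flushing CPU caches to the NVDIMM
    return value
```

Each process restart picks up exactly where the last one left off:
close the mapping, reopen it, and the counter continues from its
previous value, with no "boot-time" reconstruction of state needed.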

For such long up-times, OpenVMS has several architectural and
practical advantages in its historical design goals related to
stability and mission-critical capability.

I consider this one of the next major challenges for operating systems
development: how best to take advantage of new hardware with main
memory that is fast, plentiful, persistent and non-volatile?

Eventually, memristor technology may add the further design challenge
that the memory itself provides switchable, executable operations
embedded in the memory structures. Future OS designs should be open to
such possibilities as well.

So far OpenVMS has mostly kept pace with the main trends of hardware
evolution. I hope OpenVMS will also have the chance to adapt to such
evolutionary hardware developments in the future.


Keith Cayemberg

