sqInt ioMicroMSecs(void)

Parms: NONE
return: milliseconds, via primitiveMillisecondClock
From: Interpreter, others

Returns the millisecond clock value.

Return the ms clock time, or some ms counter.
It may start from zero, and it may wrap (as a result of overflow, or of a clock time reset).
Note that the ms clock does not need to stay in sync with ioSeconds.
On older Macintosh VMs the clock value started from zero.
The value may wrap; the wrap limit is likely hardware and operating system dependent.

Note that ioLowResMSecs is deprecated, but may still appear in some platform API code. Historically there were three clocks on the Macintosh: ioLowResMSecs, ioMicroMSecs, and ioMSecs, which varied in accuracy and expense. The VM and image may check the clock as often as 1000 times a second (silly).
From sq.h

The primary one, ioMSecs(), is used to implement Delay and Time
millisecondClockValue. The resolution of this clock
determines the resolution of these basic timing functions. For
doing real-time control of music and MIDI, a clock with resolution
down to one millisecond is preferred, but a coarser clock (say,
1/60th second) can be used in a pinch.

The function ioMicroMSecs() is used only to collect timing statistics
for the garbage collector and other VM facilities. (The function
name is meant to suggest that the function is based on a clock
with microsecond accuracy, even though the times it returns are
in units of milliseconds.) This clock must have enough precision to
provide accurate timings, and normally isn't called frequently
enough to slow down the VM. Thus, it can use a more expensive clock
than ioMSecs(). This function is listed in the sqVirtualMachine plugin
support mechanism and thus needs to be a real function, even if a macro is
used to point to it.

There was a third form that used to be used for quickly timing primitives in
order to try to keep millisecond delays up to date. That is no longer used.

By default these are defined in sq.h as
sqInt ioMSecs(void);
sqInt ioMicroMSecs(void);

#define ioMSecs() ((1000 * clock()) / CLOCKS_PER_SEC)

ioLowResMSecs is used for some timer checking and event polling; the ioLowResMSecs clock value is updated every 16 ms by a timer thread.

ioMicroMSecs is calculated from Unix calls (gettimeofday), with the startup time subtracted.
ioMSecs calls ioMicroMSecs.

See iPhone

We AND the value with MillisecondClockMask (0x1FFFFFFF).
ioMSecs calls ioMicroMSecs
See Unix

Calculated from Unix calls (gettimeofday), with the startup time subtracted.
ioLowResMSecs is set either by the optional iTimer logic or from ioMicroMSecs.
ioMSecs calls ioMicroMSecs.

ANDed with 0x3FFFFFFF.
WHY THIS VALUE VERSUS MillisecondClockMask?
ioMSecs may call timeGetTime or GetTickCount.
Per the Windows documentation: timeGetTime retrieves the system time, in milliseconds, where the system time is the time elapsed since Windows was started; GetTickCount returns the number of milliseconds that have elapsed since the system was started.

Windows systems may only be accurate to 5 ms.

Callers may not handle wrapping, and at around 2 billion milliseconds the value could go negative.
In some places the caller does AND with MillisecondClockMask.

The correct solution is probably to AND with MillisecondClockMask at the point where the value is generated, rather than relying on each caller to AND with the correct mask.
