ratchet API Reference

Creation

Constructor

Objects are created with the ratchet.new() constructor. Its first argument is an entry point function, which becomes the first and only thread initially attached to the ratchet object; an optional error handler function may be given as a second argument (see Error Trapping below). The returned ratchet object is executed using the loop() method, or under certain circumstances the loop_once() method.

kernel = ratchet.new(function ()
    -- Attach other threads, do stuff...
end)
kernel:loop()

About Threads

The threads the ratchet library uses are not traditional threads like those created with the pthread library. They are coroutines, meaning only one of them runs at a time. Unlike traditional threads, threads created by ratchet yield control only when they make a call that pauses their execution, when they complete normally, or when they raise an error. The scheduling done by the ratchet library is controlled entirely by two things: the order in which threads were attached or unpaused, and the order in which events are triggered by libevent.

Attaching New Threads

New threads are created with the function ratchet.thread.attach(). This function can only be called from within other threads; the first thread is attached by the constructor itself. The new thread is not started immediately. Instead, once the current thread yields execution, all new threads are started in the order they were attached. The object passed to attach() does not have to be a function: it can be any object implementing __call() in its metatable.

The attach() function returns the Lua thread object that was created. This is useful for unpause() and wait_all(); more on those under Scheduling. The currently running Lua thread object can be retrieved with ratchet.thread.self(), which is identical to Lua's built-in coroutine.running().

local function dostuff(...)
    -- Processing...
end

kernel = ratchet.new(function ()
    ratchet.thread.attach(dostuff, 1, "abc")
end)
kernel:loop()
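
To illustrate the ordering described above, here is a minimal sketch (the printed messages are purely illustrative). It uses ratchet.thread.timer(), described under Basic Timers below, as a convenient way for the parent thread to yield:

kernel = ratchet.new(function ()
    -- attach() returns the new Lua thread object; the child does not
    -- start running until this (parent) thread yields.
    local child = ratchet.thread.attach(function ()
        print("child: started after the parent yielded")
    end)

    print("parent: child is attached but not yet started")
    ratchet.thread.timer(0.1) -- yields; the child runs now
    print("parent: resumed after the timer fired")
end)
kernel:loop()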

Thread-scope Variable Space

As an alternative to using globals or passing locals among the different functions called within a thread, a table (or any object, see the API) can be stored in relation to the currently running thread, where any function run by that thread can easily access it. The table is created the first time a thread calls ratchet.thread.space() and returned on every subsequent call by that thread. For example:

local function get_secret_sauce()
    return "the secret sauce is: " .. ratchet.thread.space().sauce
end

function thread_func_1()
    local space = ratchet.thread.space()
    space.sauce = "foo"

    print(get_secret_sauce())
end

function thread_func_2()
    local space = ratchet.thread.space()
    space.sauce = "bar"

    print(get_secret_sauce())
end

kernel = ratchet.new(function ()
    ratchet.thread.attach(thread_func_1)
    ratchet.thread.attach(thread_func_2)
end)
kernel:loop()

Scheduling

Pause and Unpause

A thread can choose to pause itself indefinitely, on the assumption that some logic in another thread will unpause it. This may be useful when one thread depends on only a portion of another thread's work and can safely resume execution before the other thread fully completes. Pause and unpause also allow the paused thread to retrieve information, as the thread that unpauses it can pass arguments that are returned to the paused thread.

When pausing a thread, make sure another thread holds a reference to it so it can be unpaused. This reference can come from attach() or, in the currently running thread, from ratchet.thread.self(). The thread then calls ratchet.thread.pause() with no arguments, which immediately pauses the thread and schedules another. When, at some point, another thread calls ratchet.thread.unpause() on the paused thread, the paused thread will resume execution on the next main loop iteration. The paused thread and the thread unpausing it must be attached to the same ratchet object.

To pass data from the thread that calls unpause() to the paused thread, simply provide extra arguments to it. These arguments will be passed, in order, as the return values of pause().

-- Thread that pauses...
t = ratchet.thread.self()
local important_data = ratchet.thread.pause()

-- Another thread that unpauses...
ratchet.thread.unpause(t, my_data)

If ratchet detects that all threads are waiting for others to unpause them and no threads are actually waiting on IO, it will throw a deadlock error.
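
Putting these pieces together, here is a minimal self-contained sketch (the thread roles and the printed message are purely illustrative):

kernel = ratchet.new(function ()
    -- The consumer pauses itself until another thread hands it data.
    local consumer = ratchet.thread.attach(function ()
        local data = ratchet.thread.pause()
        print("consumer received: " .. data)
    end)

    -- The producer unpauses the consumer, passing along the data.
    ratchet.thread.attach(function ()
        ratchet.thread.unpause(consumer, "hello")
    end)
end)
kernel:loop()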

Batch Waiting

Often a thread will need to wait for several other threads to complete before it can continue. Perhaps some data needs to be dispersed to several other services; it is best to dispatch them all at once and wait until the last one finishes before moving on.

Similar to pause and unpause, for a thread to wait on others it must have a reference to the Lua thread object of every thread it waits on. It then passes them all in an array table to ratchet.thread.wait_all().

When the last thread in the batch completes, the waiting thread will be resumed on the next iteration of the main loop. No return values from any threads will be saved.

children = {}
table.insert(children, ratchet.thread.attach(thread1))
table.insert(children, ratchet.thread.attach(thread2))
table.insert(children, ratchet.thread.attach(thread3))
-- etc...
ratchet.thread.wait_all(children)

Basic Timers

A thread may choose to pause for a certain period of time without needing the extra features of an advanced timer. For this, a thread need not create any special objects; it can simply call ratchet.thread.timer() with the timeout given in seconds (fractions are okay; granularity depends on the system's capabilities).

-- Pause this thread for 3.5 seconds...
ratchet.thread.timer(3.5)
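
A common use is a thread that wakes up periodically to do some work. A quick sketch, where do_periodic_work() is a hypothetical placeholder:

kernel = ratchet.new(function ()
    ratchet.thread.attach(function ()
        while true do
            do_periodic_work()        -- hypothetical placeholder
            ratchet.thread.timer(5.0) -- sleep 5 seconds between runs
        end
    end)
end)
kernel:loop()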

Alarms

Say, hypothetically, your application can hold a resource lock for no more than 60 seconds. You must complete your transaction within the duration of the lock, otherwise clean up and report an error.

A ratchet alarm is similar to the alarm() system call. The idea is, when a given number of seconds have elapsed in a thread, an error is thrown to stop the thread.

Optionally, an arbitrary function will be called prior to the error to perform any necessary cleanup. However, this function may NOT pause or call any functions that yield. Errors may be thrown inside this function, but the debug traceback may be misleading: it will indicate which function in the thread was executing when the alarm was triggered.

function timed_request(data)
    ratchet.thread.alarm(60.0, function ()
        data:cleanup()
    end)

    data:request_operation(13)
    data:request_operation(192)
    -- etc.
end

The error thrown by the alarm after the cleanup function is a ratchet error with the code ALARM. Using that code, these errors can be handled (or, more likely, ignored) appropriately.
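
As a rough sketch only: assuming the alarm error surfaces inside the thread (so that a pcall() around the timed work can catch it) and that its human-readable form contains the ALARM code, the earlier example might handle the timeout like this. See the Error Handling page for the proper way to inspect error codes:

function timed_request(data)
    ratchet.thread.alarm(60.0, function ()
        data:cleanup()
    end)

    local ok, err = pcall(function ()
        data:request_operation(13)
        data:request_operation(192)
    end)

    -- Assumption: the error's string form mentions its ALARM code.
    if not ok and tostring(err):find("ALARM") then
        print("transaction timed out, already cleaned up")
    elseif not ok then
        error(err) -- re-raise anything unrelated to the alarm
    end
end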

Error Trapping

In applications that require a library like ratchet, it is generally not desired for an error in one thread to propagate through the entire system. The ratchet library throws and propagates errors on most types of failures, under the assumption that they will be caught and handled gracefully.

Errors in the threads themselves will propagate down to the loop() or loop_once() calls, so putting those calls in a pcall() is one way to catch them.

kernel = ratchet.new(main_thread)

repeat
    local successful, ret = pcall(kernel.loop_once, kernel)
    if not successful then
        print("Error occurred: " .. tostring(ret))
        print("Continuing...")
    end
until successful and not ret

However, by the time an error has been caught by pcall(), the Lua stack has unwound and no useful traceback information will be available. A better alternative is to add an error handler function to ratchet.new().

function error_handler(err, thread)
    print(debug.traceback(thread, err, 2))
    os.exit(1) -- Or skip this to resume event loop.
end

kernel = ratchet.new(main_thread, error_handler)
kernel:loop()

All errors thrown by ratchet will be convertible to a human-readable string. For more details about errors thrown and how to catch them more effectively, see the Error Handling page.



