EventLoop: direct epoll/kqueue integration #14959

Closed
Commits (73 total; the diff below shows changes from 70 commits)
2c20b91
Fix: don't cancel timeout select action event twice
ysbaddaden Jul 16, 2024
26bbb20
Add :evloop to Crystal.trace
ysbaddaden Jul 15, 2024
558e50f
Epoll: initial attempt (doesn't compile)
ysbaddaden Jul 12, 2024
ec1f4e0
Fix: epoll_event is only packed on x86_64
ysbaddaden Jul 16, 2024
c9e4554
Fix: disable EPOLLEXCLUSIVE for now
ysbaddaden Jul 18, 2024
3c1dc8a
Fix: close in MT environment
ysbaddaden Jul 18, 2024
24c56ee
Fix: add optional Crystal::EventLoop#after_fork_before_exec (MT)
ysbaddaden Jul 18, 2024
8c186dc
Fix: after_fork (no MT) or after_fork_before_exec (MT only)
ysbaddaden Jul 18, 2024
4089895
fixup! Fix: add optional Crystal::EventLoop#after_fork_before_exec (MT)
ysbaddaden Jul 18, 2024
b7d6fec
Prefer eventfd over pipe (only one fd, smaller struct in kernel)
ysbaddaden Jul 18, 2024
3def18e
Save pointer to Node instead of fd (skips searches after wait)
ysbaddaden Jul 18, 2024
887f29c
fixup! Save pointer to Node instead of fd (skips searches after wait)
ysbaddaden Jul 18, 2024
1457032
fixup! Prefer eventfd over pipe (only one fd, smaller struct in kernel)
ysbaddaden Jul 19, 2024
f2a9b07
Add Crystal::System::EventFD abstraction
ysbaddaden Jul 19, 2024
f6fb444
Use generic :system event type instead of :interrupt
ysbaddaden Jul 19, 2024
5c048f8
fixup! Add Crystal::System::EventFD abstraction
ysbaddaden Jul 19, 2024
7c33a38
Extract timers + cleanup + one timerfd per eventloop
ysbaddaden Jul 19, 2024
4cbf6e5
Fix: also check that timers are empty (not only events)
ysbaddaden Jul 20, 2024
b835d00
Fix: missing mutex sync
ysbaddaden Jul 22, 2024
687f9df
Extract Evented::Eventloop from Epoll::EventLoop
ysbaddaden Jul 22, 2024
2c7f8a5
Extract #system_pipe from Crystal::System::FileDescriptor.pipe
ysbaddaden Jul 22, 2024
197a2a2
Add Crystal::Kqueue::EventLoop (*BSD, Darwin)
ysbaddaden Jul 22, 2024
2c6ebd9
Fix: explicit none/read/write registration for EventQueue::Node
ysbaddaden Jul 23, 2024
8185ecd
Fix: dequeue timer when io event is ready
ysbaddaden Jul 23, 2024
1db5459
fix: format + cleanup
ysbaddaden Jul 23, 2024
8d09249
Fix: pthread_mutex_unlock fails with EPERM in specs
ysbaddaden Jul 23, 2024
ff82437
Fix: cleanup sleep/select_timeout event
ysbaddaden Jul 23, 2024
b2292d5
Evented::Event#time => #wake_at
ysbaddaden Jul 23, 2024
275f2cc
Fix: Eventfd#write raises on success
ysbaddaden Jul 25, 2024
a21f362
Experiment: single EventLoop for the process
ysbaddaden Aug 21, 2024
6889332
Epoll: reenable timerfd, fix timers + review abstract API
ysbaddaden Aug 21, 2024
e8a0042
Epoll: enable EPOLLRDHUP
ysbaddaden Aug 21, 2024
ed5aae4
Kqueue: support single EventLoop per process (untested)
ysbaddaden Aug 21, 2024
e5e3381
Fix: runtime issues
ysbaddaden Aug 21, 2024
cb77996
fixup! Kqueue: support single EventLoop per process (untested)
ysbaddaden Aug 21, 2024
03667ab
Fix: Process.run hangs forever
ysbaddaden Aug 21, 2024
9a10710
cleanup
ysbaddaden Aug 21, 2024
a61cad9
Fix: broken interpreter build
ysbaddaden Aug 21, 2024
3f94142
Kqueue: must add a kevent for each filter (one read, one write)
ysbaddaden Aug 22, 2024
a2519d4
Abstract process_timers + keep lock while processing
ysbaddaden Aug 22, 2024
843cc51
Don't use evloop in Crystal::System::Process.spawn (unix)
ysbaddaden Aug 23, 2024
b432acb
Don't use evloop in Crystal::System::Process.reopen_io (unix)
ysbaddaden Aug 23, 2024
8bdb283
Kqueue: be resilient to closed stdio/pipe
ysbaddaden Aug 23, 2024
6328afc
Fix: crystal tool format
ysbaddaden Aug 23, 2024
520ef7a
fixup! Don't use evloop in Crystal::System::Process.reopen_io (unix)
ysbaddaden Aug 23, 2024
79c1c35
Fix: no need to re-create epoll/kqueue after fork before exec
ysbaddaden Aug 23, 2024
3f286a2
Fix: spare timer update after processing timers
ysbaddaden Aug 23, 2024
fd02a1d
Fix: spec failures on darwin/kqueue (blind attempt)
ysbaddaden Aug 23, 2024
5433e30
Fix: raise on fork (not supported by Crystal::Evented)
ysbaddaden Aug 23, 2024
835cbe1
Fix: disable fork codegen in compiler when Crystal::Evented
ysbaddaden Aug 23, 2024
426cd73
Kqueue: compilation for OpenBSD
ysbaddaden Aug 24, 2024
d24c297
Fix: disable fork when Crystal::Evented is defined
ysbaddaden Aug 24, 2024
095bbd3
Fix: EPOLLRDHUP mustn't prevent handling EPOLLOUT
ysbaddaden Aug 24, 2024
7d820a7
Revert "Fix: spec failures on darwin/kqueue (blind attempt)"
ysbaddaden Aug 24, 2024
7ef7704
Test: remove fd from epoll on finalize
ysbaddaden Aug 24, 2024
a75ed0d
CI: set verbose mode to investigate segfault
ysbaddaden Aug 24, 2024
c4a9473
Fix: allocate poll descriptors in arena + lazy registration
ysbaddaden Aug 26, 2024
4bd421c
Reenable fork since we know about registered fds
ysbaddaden Aug 26, 2024
0e15128
Fix: add arena to kqueue evloop
ysbaddaden Aug 26, 2024
f278e04
Revert "CI: set verbose mode to investigate segfault"
ysbaddaden Aug 26, 2024
187cebd
Fix: set @closed in System::FileDescriptor#file_descriptor_close
ysbaddaden Aug 29, 2024
1d84083
Add generation to evloop arena
ysbaddaden Aug 29, 2024
f80622c
Fix: compilation error on crystal 1.0.0
ysbaddaden Aug 29, 2024
c0c1cfd
Fix: add generation arena index to kqueue evloop
ysbaddaden Aug 30, 2024
8d808cc
fixup! Fix: add generation arena index to kqueue evloop
ysbaddaden Aug 30, 2024
ed41608
Fix: Crystal::SpinLock should be a struct
ysbaddaden Aug 31, 2024
8b2fd43
Fix: compilation with preview_mt
ysbaddaden Aug 31, 2024
b3bc0ed
Allow multiple Evented::EventLoop instances (with ownership transfer)
ysbaddaden Sep 2, 2024
4586892
Add :evloop_libevent to force libevent event loop
ysbaddaden Sep 2, 2024
07220b2
Fix: global arena accessor (class vars aren't inherited)
ysbaddaden Sep 3, 2024
086e0ee
Fix: close must cancel timers on the owning evloop (not the current one)
ysbaddaden Sep 3, 2024
e674803
Fix: ignore errno on epoll_del in IO finalizer
ysbaddaden Sep 3, 2024
d1d6d72
Add Crystal::EventLoop#delete to clean up on finalize
ysbaddaden Sep 5, 2024
8 changes: 5 additions & 3 deletions src/channel/select/timeout_action.cr
@@ -58,9 +58,11 @@ class Channel(T)
     end
 
     def time_expired(fiber : Fiber) : Nil
-      if @select_context.try &.try_trigger
-        fiber.enqueue
-      end
+      fiber.enqueue if time_expired?
+    end
+
+    def time_expired? : Bool
+      @select_context.try &.try_trigger || false
     end
   end
 end
152 changes: 152 additions & 0 deletions src/crystal/arena.cr
ysbaddaden (Contributor, Author) commented:

This should be renamed to Crystal::Evented::Arena since it's not a generic generational arena (memory region). It takes advantage of the fact that the OS kernel hands out fd numbers (each one is guaranteed unique) and always reuses closed fds instead of growing the number space (until it has to).

An actual generational arena would keep a list of free indexes.

Note: the goal of the arena is to:

  • avoid repeated allocations;
  • avoid polluting the IO object with the PollDescriptor (doesn't exist in other evloops);
  • avoid saving raw pointers into kernel data structures;
  • safely detect allocation issues instead of segfaults because of raw pointers.

@@ -0,0 +1,152 @@
# OPTIMIZE: can the generation help to avoid the mutation lock (atomic)?
# OPTIMIZE: consider a memory map (mmap, VirtualAlloc) with a maximum capacity
class Crystal::Arena(T)
  struct Allocation(T)
    property generation = 0_u32
    property? allocated = false
    @object = uninitialized T

    def pointer : Pointer(T)
      pointerof(@object)
    end

    def free : Nil
      @generation &+= 1_u32
      @allocated = false
      pointer.clear(1)
    end
  end

  @buffer : Slice(Allocation(T))

  def initialize
    @lock = SpinLock.new
    @buffer = Pointer(Allocation(T)).malloc(32).to_slice(32)
  end

  private def grow_buffer(capacity)
    buffer = Pointer(Allocation(T)).malloc(capacity).to_slice(capacity)
    buffer.to_unsafe.copy_from(@buffer.to_unsafe, @buffer.size)
    @buffer = buffer
  end

  # Returns a pointer to the object allocated at *gen_idx* (generation index).
  # Raises if the object isn't allocated.
  # Raises if the generation has changed (i.e. the object has been freed then reallocated)
  # Raises if *index* is negative.
  def get(gen_idx : Int64) : Pointer(T)
    index, generation = from_gen_index(gen_idx)

    in_bounds!(index)
    allocation = @buffer.to_unsafe + index

    unless allocation.value.allocated?
      raise RuntimeError.new("#{self.class.name}: object not allocated at index #{index}")
    end

    unless (actual = allocation.value.generation) == generation
      raise RuntimeError.new("#{self.class.name}: object generation changed at index #{index} (#{generation} => #{actual})")
    end

    allocation.value.pointer
  end

  # Yields and allocates the object at *index* unless already allocated.
  # Returns a pointer to the object at *index* and the generation index.
  #
  # There are no generational checks.
  # Raises if *index* is negative.
  def lazy_allocate(index : Int32, &) : {Pointer(T), Int64}
    # fast-path: check if already allocated
    if in_bounds?(index)
      allocation = @buffer.to_unsafe + index

      if allocation.value.allocated?
        return {allocation.value.pointer, to_gen_index(index, allocation)}
      end
    end

    # slow-path: allocate
    @lock.sync do
      if index >= @buffer.size
        # slowest-path: grow the buffer
        grow_buffer(Math.pw2ceil(Math.max(index, @buffer.size * 2)))
      end

      unsafe_allocate(index) do |pointer, gen_index|
        yield pointer, gen_index
      end
    end
  end

  private def unsafe_allocate(index : Int32, &) : {Pointer(T), Int64}
    allocation = @buffer.to_unsafe + index
    pointer = allocation.value.pointer
    gen_index = to_gen_index(index, allocation)

    unless allocation.value.allocated?
      allocation.value.allocated = true
      yield pointer, gen_index
    end

    {pointer, gen_index}
  end

  # Yields the object allocated at *index* then releases it.
  # Does nothing if the object wasn't allocated.
  #
  # Raises if *index* is negative.
  def free(index : Int32, &) : Nil
    return unless in_bounds?(index)

    @lock.sync do
      allocation = @buffer.to_unsafe + index
      return unless allocation.value.allocated?

      yield allocation.value.pointer
      allocation.value.free
    end
  end

  private def in_bounds?(index : Int32) : Bool
    if index.negative?
      raise ArgumentError.new("#{self.class.name}: negative index #{index}")
    else
      index < @buffer.size
    end
  end

  private def in_bounds!(index : Int32) : Nil
    if index.negative?
      raise ArgumentError.new("#{self.class.name}: negative index #{index}")
    elsif index >= @buffer.size
      raise IndexError.new("#{self.class.name}: out of bounds index #{index}")
    end
  end

  # Iterates all allocated objects, yields the actual index as well as the
  # generation index.
  def each(&) : Nil
    ptr = @buffer.to_unsafe

    @buffer.size.times do |index|
      allocation = ptr + index

      if allocation.value.allocated?
        yield index, to_gen_index(index, allocation)
      end
    end
  end

  private def to_gen_index(index : Int32, allocation : Pointer(Allocation(T))) : Int64
    to_gen_index(index, allocation.value.generation)
  end

  private def to_gen_index(index : Int32, generation : UInt32) : Int64
    (index.to_i64! << 32) | generation.to_u64!
  end

  private def from_gen_index(gen_index : Int64) : {Int32, UInt32}
    {(gen_index >> 32).to_i32!, gen_index.to_u32!}
  end
end
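For context, a minimal usage sketch (not part of the diff) of how an event loop could map a file descriptor to a per-fd structure through this arena; the Waiters struct and the fd value are purely illustrative:

# Hypothetical sketch: the per-fd struct and the fd value are illustrative only.
record Waiters, readers : Int32 = 0, writers : Int32 = 0

arena = Crystal::Arena(Waiters).new
fd = 9 # some open file descriptor

# Allocate (or fetch) the slot for this fd; the block only runs on first
# allocation, and the returned gen_index can be stored in kernel data
# structures (e.g. epoll_event.data.u64) instead of a raw pointer.
ptr, gen_index = arena.lazy_allocate(fd) { |waiters, _| waiters.value = Waiters.new }

# Resolve the generation index back to a pointer; raises if the slot was
# freed and the fd reused in the meantime.
current = arena.get(gen_index).value

# Release the slot when the fd is closed.
arena.free(fd) { |waiters| }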
4 changes: 4 additions & 0 deletions src/crystal/pointer_linked_list.cr
@@ -86,4 +86,8 @@ struct Crystal::PointerLinkedList(T)
     each { |node| yield node }
     @head = Pointer(T).null
   end
+
+  def clear : Nil
+    @head = Pointer(T).null
+  end
 end
2 changes: 1 addition & 1 deletion src/crystal/spin_lock.cr
@@ -1,5 +1,5 @@
 # :nodoc:
-class Crystal::SpinLock
+struct Crystal::SpinLock
   private UNLOCKED = 0
   private LOCKED = 1
 
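A brief illustration (not stated in the diff) of why the struct change matters: the lock is now stored inline by value, so an owner such as the arena above embeds it without a separate heap allocation, while a copy of the struct would spin on its own state and must be avoided. The class name below is hypothetical:

# Hypothetical sketch: a struct SpinLock lives inline in its owner.
class Owner
  @lock = Crystal::SpinLock.new # stored by value, no separate allocation

  def locked_work(&)
    # Always lock through the owning object; copying the struct into a local
    # or another object would produce an independent lock.
    @lock.sync { yield }
  end
end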
20 changes: 18 additions & 2 deletions src/crystal/system/event_loop.cr
@@ -4,7 +4,15 @@ abstract class Crystal::EventLoop
     {% if flag?(:wasi) %}
       Crystal::Wasi::EventLoop.new
     {% elsif flag?(:unix) %}
-      Crystal::LibEvent::EventLoop.new
+      {% if flag?(:evloop_libevent) %}
+        Crystal::LibEvent::EventLoop.new
+      {% elsif flag?(:bsd) || flag?(:darwin) %}
+        Crystal::Kqueue::EventLoop.new
+      {% elsif flag?(:linux) || flag?(:solaris) %}
+        Crystal::Epoll::EventLoop.new
+      {% else %}
+        Crystal::LibEvent::EventLoop.new
+      {% end %}
     {% elsif flag?(:win32) %}
       Crystal::IOCP::EventLoop.new
     {% else %}
@@ -73,7 +81,15 @@ end
 {% if flag?(:wasi) %}
   require "./wasi/event_loop"
 {% elsif flag?(:unix) %}
-  require "./unix/event_loop_libevent"
+  {% if flag?(:evloop_libevent) %}
+    require "./unix/event_loop_libevent"
+  {% elsif flag?(:bsd) || flag?(:darwin) %}
+    require "./unix/kqueue/event_loop"
+  {% elsif flag?(:linux) || flag?(:solaris) %}
+    require "./unix/epoll/event_loop"
+  {% else %}
+    require "./unix/event_loop_libevent"
+  {% end %}
 {% elsif flag?(:win32) %}
   require "./win32/event_loop_iocp"
 {% else %}
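For reference (not part of the diff): `evloop_libevent` is an ordinary compile-time flag, so with this change the backend is picked automatically per target, and a program can opt back into libevent by building with something like `crystal build -Devloop_libevent app.cr`; the flag name is taken from the macro checks above and may have been renamed in later revisions.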
53 changes: 53 additions & 0 deletions src/crystal/system/unix/epoll.cr
@@ -0,0 +1,53 @@
{% skip_file unless flag?(:linux) || flag?(:solaris) %}

require "c/sys/epoll"

struct Crystal::System::Epoll
  def initialize
    @epfd = LibC.epoll_create1(LibC::EPOLL_CLOEXEC)
    raise RuntimeError.from_errno("epoll_create1") if @epfd == -1
  end

  def fd : Int32
    @epfd
  end

  def add(fd : Int32, epoll_event : LibC::EpollEvent*) : Nil
    if LibC.epoll_ctl(@epfd, LibC::EPOLL_CTL_ADD, fd, epoll_event) == -1
      raise RuntimeError.from_errno("epoll_ctl(EPOLL_CTL_ADD)") unless Errno.value == Errno::EPERM
    end
  end

  def add(fd : Int32, events : UInt32, u64 : UInt64) : Nil
    epoll_event = uninitialized LibC::EpollEvent
    epoll_event.events = events
    epoll_event.data.u64 = u64
    add(fd, pointerof(epoll_event))
  end

  def modify(fd : Int32, epoll_event : LibC::EpollEvent*) : Nil
    if LibC.epoll_ctl(@epfd, LibC::EPOLL_CTL_MOD, fd, epoll_event) == -1
      raise RuntimeError.from_errno("epoll_ctl(EPOLL_CTL_MOD)")
    end
  end

  # OPTIMIZE: if we added a fd only when it would block (instead of immediately
  # on open/accept), then maybe we could spare the errno checks for EPERM and
  # ENOENT (?)
  def delete(fd : Int32) : Nil
    if LibC.epoll_ctl(@epfd, LibC::EPOLL_CTL_DEL, fd, nil) == -1
      raise RuntimeError.from_errno("epoll_ctl(EPOLL_CTL_DEL)") unless Errno.value.in?(Errno::EPERM, Errno::ENOENT)
    end
  end

  # `timeout` is in milliseconds; -1 will wait indefinitely; 0 will never wait.
  def wait(events : Slice(LibC::EpollEvent), timeout : Int32) : Slice(LibC::EpollEvent)
    count = LibC.epoll_wait(@epfd, events.to_unsafe, events.size, timeout)
    raise RuntimeError.from_errno("epoll_wait") if count == -1 && Errno.value != Errno::EINTR
    events[0, count.clamp(0..)]
  end

  def close : Nil
    LibC.close(@epfd)
  end
end
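To make the wrapper's flow concrete, a minimal hypothetical usage sketch (not part of the diff); it assumes a Linux target where the c/sys/epoll bindings expose an EPOLLIN constant, and it polls stdin purely for illustration:

epoll = Crystal::System::Epoll.new

# Register fd 0 (stdin) for readability; in the event loop the u64 tag would
# be an arena generation index rather than the fd itself.
epoll.add(0, LibC::EPOLLIN.to_u32!, 0_u64)

# Reusable buffer for ready events, allocated the same way as the arena buffer.
events = Pointer(LibC::EpollEvent).malloc(32).to_slice(32)

# Wait up to one second; only the ready entries are returned.
epoll.wait(events, 1_000).each do |event|
  puts "event on tag #{event.data.u64} (events=#{event.events})"
end

epoll.delete(0)
epoll.close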