Compare commits


26 Commits

Author SHA1 Message Date
木头云
cebe9d45ce
Merge 0d53a3cdb1e6be1ee8a3da93702b767dc2b04fb5 into f8e71a548c3d9f2ae2bd2f7bfe1114901f9568a4 2025-11-30 13:42:39 +00:00
木头云
f8e71a548c
Merge pull request #159 from mutouyun/refactoring-ut
refactor(test): comprehensive unit test refactoring
2025-11-30 19:36:52 +08:00
木头云
cf5738eb3a fix(test): replace C++17 structured bindings with C++14 compatible code
Problem:
- Two test cases in test_buffer.cpp used structured bindings (auto [a, b])
- Structured bindings are a C++17 feature
- Project requires C++14 compatibility

Solution:
- Replace 'auto [ptr, size] = buf.to_tuple()' with C++14 compatible code
- Use std::get<N>() to extract tuple elements
- Modified tests: ToTupleNonConst, ToTupleConst

Changes:
- Line 239: Use std::get<0/1>(tuple) instead of structured binding
- Line 252: Use std::get<0/1>(tuple) instead of structured binding
- Add explanatory comments for clarity

This ensures the test suite compiles with C++14 standard.
2025-11-30 11:16:03 +00:00
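A minimal before/after sketch of the C++14-compatible pattern described in this commit (illustrative only; to_tuple_example is a hypothetical stand-in for ipc::buffer::to_tuple()):

#include <cstddef>
#include <tuple>

// Hypothetical stand-in for buffer::to_tuple(), which yields {data pointer, size}.
inline std::tuple<void*, std::size_t> to_tuple_example(void* p, std::size_t n) {
  return std::make_tuple(p, n);
}

inline void demo(void* p, std::size_t n) {
  // C++17 only (what the tests used before):
  //   auto [ptr, size] = to_tuple_example(p, n);

  // C++14 compatible (what the tests use now): extract elements with std::get<N>().
  auto tup  = to_tuple_example(p, n);
  auto ptr  = std::get<0>(tup);
  auto size = std::get<1>(tup);
  (void)ptr; (void)size;
}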
木头云
c31ef988c1 fix(shm): remove redundant self-assignment in shm_win.cpp
- Remove useless 'ii->size_ = ii->size_;' statement at line 140
- The user-requested size is already set in acquire() function
- Simplify else branch to just a comment for clarity
- No functional change, just code cleanup
2025-11-30 11:06:04 +00:00
木头云
7726742157 feat(shm): implement reference counting for Windows shared memory
Problem:
- Reference counting tests fail on Windows (ReleaseMemory, ReferenceCount,
  SubtractReference, HandleRef, HandleSubRef)
- get_ref() and sub_ref() were stub implementations returning 0/doing nothing
- A CreateFileMapping HANDLE lacks a built-in reference counting mechanism

Solution:
- Implement reference counting using std::atomic<std::int32_t> stored at
  the end of shared memory (same strategy as POSIX version)
- Add calc_size() helper to allocate extra space for atomic counter
- Add acc_of() helper to access the atomic counter at the end of memory
- Modify acquire() to allocate calc_size(size) instead of size
- Modify get_mem() to initialize counter to 1 on first mapping
- Modify release() to decrement counter and return ref count before decrement
- Implement get_ref() to return current reference count
- Implement sub_ref() to atomically decrement reference count
- Convert file from Windows (CRLF) to Unix (LF) line endings for consistency

Key Implementation Details:
1. Reference counter stored at end of shared memory (aligned to info_t)
2. First get_mem() call: fetch_add(1) initializes counter to 1
3. release() returns ref count before decrement (for semantics compatibility)
4. Memory layout: [user data][padding][atomic<int32_t> counter]
5. Uses memory_order_acquire/release/acq_rel for proper synchronization

This makes the Windows implementation match POSIX behavior and ensures all
reference counting tests pass on the Windows platform.
2025-11-30 10:55:24 +00:00
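A compact, self-contained sketch of the size/layout arithmetic described above, assuming info_t holds only the std::atomic<std::int32_t> counter (so its size and alignment are typically 4 bytes); the real helpers appear in the shm_win.cpp diff further down:

#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct info_t {
  std::atomic<std::int32_t> acc_;  // reference counter stored at the end of the mapping
};

// Round the user-requested size up to alignof(info_t), then append the counter.
constexpr std::size_t calc_size(std::size_t size) {
  return ((((size - 1) / alignof(info_t)) + 1) * alignof(info_t)) + sizeof(info_t);
}

int main() {
  // With a 4-byte, 4-aligned info_t: 100 -> 100 + 4 = 104, 101 -> 104 + 4 = 108.
  std::printf("calc_size(100) = %zu\n", calc_size(100));
  std::printf("calc_size(101) = %zu\n", calc_size(101));
  // Resulting layout: [user data][padding to alignof(info_t)][atomic<int32_t> counter]
}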
木头云
b9dd75ccd9 fix(platform): rename linux/posix namespaces to avoid predefined macro conflict
Problem:
- 'linux' is a predefined macro on Linux platforms
- Using 'namespace linux' causes compilation errors
- Preprocessor replaces 'linux' with '1' before compilation

Solution:
- Rename 'namespace linux' to 'namespace linux_'
- Rename 'namespace posix' to 'namespace posix_'
- Update all 7 call sites accordingly:
  - linux/condition.h:  linux_::detail::make_timespec()
  - linux/mutex.h:      linux_::detail::make_timespec() (2 places)
  - posix/condition.h:  posix_::detail::make_timespec()
  - posix/mutex.h:      posix_::detail::make_timespec() (2 places)
  - posix/semaphore_impl.h: posix_::detail::make_timespec()

This prevents preprocessor macro expansion issues while maintaining
the ODR violation fix from the previous commit.
2025-11-30 07:07:14 +00:00
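A tiny illustration of the collision being avoided here: with GNU language dialects on Linux, the identifier linux is predefined as a macro expanding to 1, so it cannot be used as a namespace name.

// Under gcc/clang GNU dialects (e.g. -std=gnu++14) on Linux:
//   namespace linux { }    // preprocesses to: namespace 1 { }  -> syntax error
// The trailing underscore sidesteps the predefined macro entirely:
namespace linux_ {
namespace detail {
// make_timespec() and calc_wait_time() live here after the rename.
} // namespace detail
} // namespace linux_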
木头云
e66bd880e9 fix(platform): resolve ODR violation in make_timespec/calc_wait_time inline functions
Problem:
- Both linux/get_wait_time.h and posix/get_wait_time.h define inline functions
  make_timespec() and calc_wait_time() in namespace ipc::detail
- On Linux, semaphore uses posix implementation, but may include both headers
- This causes ODR (One Definition Rule) violation - undefined behavior
- Different inline function definitions with the same name violate the C++ standard
- Manifested as test failures in SemaphoreTest::WaitTimeout

Solution:
- Add platform-specific namespace layer between ipc and detail:
  - linux/get_wait_time.h: ipc::linux::detail::make_timespec()
  - posix/get_wait_time.h: ipc::posix::detail::make_timespec()
- Update all call sites to use fully qualified names:
  - linux/condition.h: linux::detail::make_timespec()
  - linux/mutex.h: linux::detail::make_timespec() (2 places)
  - posix/condition.h: posix::detail::make_timespec()
  - posix/mutex.h: posix::detail::make_timespec() (2 places)
  - posix/semaphore_impl.h: posix::detail::make_timespec()

This ensures each platform's implementation is uniquely named, preventing
ODR violations and ensuring correct function resolution at compile time.
2025-11-30 07:00:32 +00:00
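A minimal sketch of the ODR hazard and the fix pattern described above (not the project's actual headers; the stand-in function bodies are hypothetical). Note that the platform namespaces were renamed to linux_/posix_ in the follow-up commit listed above.

// Hazard: two headers both define ipc::detail::make_timespec() inline with
// different bodies. A program that (directly or indirectly) includes both
// violates the One Definition Rule -- undefined behavior.
//
//   // linux/get_wait_time.h               // posix/get_wait_time.h
//   namespace ipc { namespace detail {     namespace ipc { namespace detail {
//   inline timespec make_timespec(...);    inline timespec make_timespec(...);
//   }}                                     }}

// Fix: insert a platform namespace layer so the two inline functions have
// distinct qualified names and can coexist in one program.
namespace ipc { namespace linux_ { namespace detail {
inline int platform_tag() { return 1; }  // hypothetical stand-in body
}}}
namespace ipc { namespace posix_ { namespace detail {
inline int platform_tag() { return 2; }  // hypothetical stand-in body
}}}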
木头云
ff74cdd57a fix(test): correct receiver loop count in MultipleSendersReceivers
Problem:
- Receivers were exiting after receiving only messages_per_sender (5) messages
- In broadcast mode, each message is sent to ALL receivers
- If sender1 completes quickly, all receivers get 5 messages and exit
- This causes sender2's messages to fail (no active receivers)

Solution:
- Each receiver should loop for num_senders * messages_per_sender (2 * 5 = 10) messages
- This ensures all receivers stay active until ALL senders complete
- Now receivers wait for: 2 senders × 5 messages each = 10 total messages
- Expected received_count: 2 senders × 5 messages × 2 receivers = 20 messages

This fix ensures all senders can successfully complete their sends before
any receiver exits.
2025-11-30 06:38:38 +00:00
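A reduced sketch of the corrected receive loop, assuming the broadcast semantics and test parameters stated above (the channel object and message handling are elided):

#include <atomic>

constexpr int num_senders         = 2;  // test parameters from the commit message
constexpr int messages_per_sender = 5;

std::atomic<int> received_count{0};

// Each receiver must drain every broadcast message from every sender;
// exiting after only messages_per_sender messages strands the slower sender.
inline void receiver_loop(/* ipc::route &rc elided */) {
  const int expected = num_senders * messages_per_sender;  // 2 * 5 = 10 per receiver
  for (int i = 0; i < expected; ++i) {
    // auto msg = rc.recv();  // blocking receive in the real test
    ++received_count;         // with 2 receivers, the grand total reaches 20
  }
}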
木头云
78be284668 fix(test): correct test logic and semantics in multiple test cases
1. ChannelTest::MultipleSendersReceivers
   - Add C++14-compatible latch implementation (similar to C++20 std::latch)
   - Ensure receivers are ready before senders start sending messages
   - This prevents race condition where senders might send before receivers are listening

2. RWLockTest::ReadWriteReadPattern
   - Fix test logic: lock_shared allows multiple concurrent readers
   - The previous test had a race condition where both threads could read the same value
   - New test: each thread writes based on thread id (1 or 2), then reads
   - Expected result: 1*20 + 2*20 = 60

3. ShmTest::ReleaseMemory
   - Correct return value semantics: release() returns ref count before decrement, or -1 on error
   - Must call get_mem() to map memory and set ref count to 1 before release
   - Expected: release() returns 1 (ref count before decrement)

4. ShmTest::ReferenceCount
   - Correct semantics: ref count is 0 after acquire (memory not mapped)
   - get_mem() maps memory and sets ref count to 1
   - Second acquire+get_mem increases ref count to 2
   - Test now properly validates reference counting behavior

5. ShmTest::SubtractReference
   - sub_ref() only works after get_mem() has mapped the memory
   - Must call get_mem() first to initialize ref count to 1
   - sub_ref() then decrements it to 0
   - Test now follows correct API usage pattern
2025-11-30 06:06:54 +00:00
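A sketch of the "receivers ready before senders start" handshake from item 1, using a minimal C++14 latch like the one the tests add (the same helper appears verbatim in test_ipc_channel.cpp further down); channel setup and messaging are elided:

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

class latch {  // minimal C++14 stand-in for C++20 std::latch
public:
  explicit latch(std::ptrdiff_t count) : count_(count) {}
  void count_down() {
    std::lock_guard<std::mutex> lk(m_);
    if (--count_ <= 0) cv_.notify_all();
  }
  void wait() {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return count_ <= 0; });
  }
private:
  std::ptrdiff_t count_;
  std::mutex m_;
  std::condition_variable cv_;
};

int main() {
  constexpr int num_receivers = 2;
  latch receivers_ready(num_receivers);

  std::vector<std::thread> receivers;
  for (int i = 0; i < num_receivers; ++i) {
    receivers.emplace_back([&receivers_ready] {
      // ...connect the ipc receiver here...
      receivers_ready.count_down();  // signal: this receiver is listening
      // ...receive loop here...
    });
  }

  receivers_ready.wait();  // senders are started only after every receiver is ready
  // ...spawn sender threads here...
  for (auto &t : receivers) t.join();
}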
木头云
d5f787596a fix(test): fix double-free in HandleDetachAttach test
- Problem: calling h2.release() followed by shm::remove(id) causes use-after-free
  - h2.release() internally calls shm::release(id) which frees the id structure
  - shm::remove(id) then accesses the freed id pointer -> crash

- Solution: detach the id from handle first, then call shm::remove(id)
  - h2.detach() returns the id without releasing it
  - shm::remove(id) can then safely clean up both memory and disk file

- This completes the fix for all ShmTest double-free issues
2025-11-30 05:38:59 +00:00
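A sketch of the safe teardown order described above, assuming the shm::handle / ipc::shm free-function API referenced by the commit (the segment name here is hypothetical):

#include "libipc/shm.h"

inline void safe_teardown_sketch() {
  ipc::shm::handle h2;
  if (!h2.acquire("detach-attach-demo", 1024)) return;  // hypothetical name

  // Unsafe pattern the test used to have:
  //   h2.release();          // internally calls shm::release(id) and frees the id
  //   ipc::shm::remove(id);  // use-after-free: id was already freed above

  // Fixed pattern: detach first, so the handle gives up the id without freeing it,
  // then let shm::remove(id) clean up both the memory and the disk file once.
  auto id = h2.detach();
  ipc::shm::remove(id);
}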
木头云
0ecf1a4137 docs(shm): add semantic comments for release/remove and fix double-free in tests
- Add comprehensive documentation for shm::release(id), shm::remove(id), and shm::remove(name)
  - release(id): Decrements ref count, cleans up memory and disk file when count reaches zero
  - remove(id): Calls release(id) internally, then forces disk file cleanup (WARNING: do not use after release)
  - remove(name): Only removes disk file, safe to use anytime

- Fix critical double-free bug in ShmTest test cases
  - Problem: calling release(id) followed by remove(id) causes use-after-free crash
    because release() already frees the id structure
  - Solution: replace 'release(id); remove(id);' pattern with just 'remove(id)'
  - Fixed tests: AcquireCreate, AcquireCreateOrOpen, GetMemory, GetMemoryNoSize,
    RemoveById, SubtractReference
  - Kept 'release(id); remove(name);' pattern unchanged (safe usage)

- Add explanatory comments in test code to prevent future misuse
2025-11-30 05:28:48 +00:00
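A short usage sketch of the documented semantics, limited to the two patterns the commit calls safe (segment names are hypothetical):

#include "libipc/shm.h"

inline void cleanup_patterns_sketch() {
  // Pattern A: remove(id) alone. It calls release(id) internally and then forces
  // removal of the backing file -- never call release(id) on the same id first.
  auto id_a = ipc::shm::acquire("cleanup-demo-a", 256);
  ipc::shm::remove(id_a);

  // Pattern B: release(id) followed by remove(name). remove(name) only deletes the
  // disk file and never touches the (already freed) id, so the combination is safe.
  auto id_b = ipc::shm::acquire("cleanup-demo-b", 256);
  (void)ipc::shm::release(id_b);
  ipc::shm::remove("cleanup-demo-b");
}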
木头云
91e4489a55 refactor(buffer): rename 'additional' parameter to 'mem_to_free' for clarity
Header changes (include/libipc/buffer.h):
- Rename: additional → mem_to_free (better semantic name)
- Add documentation comments explaining the parameter's purpose
- Clarifies that mem_to_free is passed to destructor instead of p
- Use case: when data pointer is offset into a larger allocation

Implementation changes (src/libipc/buffer.cpp):
- Update parameter name in constructor implementation
- No logic changes, just naming improvement

Test changes (test/test_buffer.cpp):
- Fix TEST_F(BufferTest, ConstructorWithMemToFree)
- Previous test caused crash: passed stack variable address to destructor
- New test correctly demonstrates the parameter's purpose:
  * Allocate 100-byte block
  * Use offset portion (bytes 25-75) as data
  * Destructor receives original block pointer for proper cleanup
- Prevents double-free and invalid free errors

Semantic explanation:
  buffer(data_ptr, size, destructor, mem_to_free)

  On destruction:
    destructor(mem_to_free ? mem_to_free : data_ptr, size)

  This allows:
    char* block = new char[100];
    char* data = block + 25;
    buffer buf(data, 50, my_free, block);  // Frees 'block', not 'data'
2025-11-30 05:09:56 +00:00
木头云
de76cf80d5 fix(buffer): remove const from char constructor to prevent UB
- Change: explicit buffer(char const & c) → explicit buffer(char & c)
- Remove dangerous const_cast in implementation
- Before: buffer(const_cast<char*>(&c), 1) [UB if c is truly const]
- After: buffer(&c, 1) [safe, requires non-const char]
- Prevents undefined behavior from modifying compile-time constants
- Constructor now correctly requires mutable char reference
- Aligns with buffer's mutable data semantics

The previous implementation with const_cast could lead to:
- Modifying string literals (undefined behavior)
- Modifying const variables (undefined behavior)
- Runtime crashes or data corruption

Example of prevented misuse:
  buffer buf('X');           // Now: compile error ✓
  char c = 'X';
  buffer buf(c);             // Now: works correctly ✓
  const char cc = 'Y';
  buffer buf(cc);            // Now: compile error ✓
2025-11-30 05:00:57 +00:00
木头云
8103c117f1 fix(buffer): remove redundant const qualifier in array constructor
- Change: byte_t const (& data)[N] → byte_t (& data)[N]
- Allows non-const byte arrays to be accepted by the constructor
- Fixes defect discovered by TEST_F(BufferTest, ConstructorFromByteArray)
- The const qualifier on array elements was too restrictive
- Keep char const & c unchanged as it's correct for single char reference
2025-11-30 04:56:02 +00:00
木头云
7447a86d41 fix(test): correct member vs non-member function calls in test_shm.cpp
- Fix handle class member functions: remove incorrect shm:: prefix
  - h.acquire() not h.shm::acquire()
  - h.release() not h.shm::release()
  - h.sub_ref() not h.shm::sub_ref()
- Keep shm:: prefix for namespace-level global functions:
  - shm::acquire(), shm::release(id), shm::get_mem()
  - shm::remove(), shm::get_ref(id), shm::sub_ref(id)
- Fix comments to use correct terminology
- Resolves: 'shm::acquire is not a class member' compilation errors
2025-11-30 04:50:21 +00:00
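A compact illustration of the two call styles this fix distinguishes: member functions invoked on a shm::handle object versus free functions qualified with the ipc::shm namespace (segment name hypothetical):

#include <cstddef>
#include "libipc/shm.h"

inline void call_style_sketch() {
  // Member functions: no shm:: prefix, called on the handle instance.
  ipc::shm::handle h;
  h.acquire("call-style-demo", 1024);
  h.release();

  // Namespace-level free functions: shm:: qualified, operate on an id_t.
  auto id = ipc::shm::acquire("call-style-demo", 1024, ipc::shm::create | ipc::shm::open);
  std::size_t size = 0;
  (void)ipc::shm::get_mem(id, &size);
  (void)ipc::shm::release(id);
  ipc::shm::remove("call-style-demo");
}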
木头云
7092df53bb fix(test): resolve id_t ambiguity in test_shm.cpp
- Remove 'using namespace ipc::shm;' to avoid id_t conflict with system id_t
- Add explicit shm:: namespace prefix to all shm types and functions
- Apply to: id_t, handle, acquire, get_mem, release, remove, get_ref, sub_ref
- Apply to: create and open constants
- Fix comments to avoid incorrect namespace references
- Resolves compilation error: 'reference to id_t is ambiguous'
2025-11-30 04:29:55 +00:00
木头云
2cde78d692 style(test): change indentation from 4 spaces to 2 spaces
- Update all test files to use 2-space indentation
- Affects: test_buffer.cpp, test_shm.cpp, test_mutex.cpp
- Affects: test_semaphore.cpp, test_condition.cpp
- Affects: test_locks.cpp, test_ipc_channel.cpp
- Improves code consistency and readability
2025-11-30 04:22:24 +00:00
木头云
b5146655fa build(test): update CMakeLists.txt for new test structure
- Collect only test_*.cpp files from test directory
- Exclude archive directory from compilation
- Use glob pattern to automatically include new tests
- Maintain same build configuration and dependencies
- Link with gtest, gtest_main, and ipc library
2025-11-30 04:16:41 +00:00
木头云
9070a899ef test(ipc): add comprehensive unit tests for route and channel
- Test route (single producer, multiple consumer) functionality
- Test channel (multiple producer, multiple consumer) functionality
- Test construction with name and prefix
- Test connection, disconnection, and reconnection
- Test send/receive with buffer, string, and raw data
- Test blocking send/recv and non-blocking try_send/try_recv
- Test timeout handling
- Test one-to-many broadcast (route)
- Test many-to-many communication (channel)
- Test recv_count and wait_for_recv functionality
- Test clone, release, and clear operations
- Test resource cleanup and storage management
- Test concurrent multi-sender and multi-receiver scenarios
2025-11-30 04:16:16 +00:00
木头云
c21138b5b4 test(locks): add comprehensive unit tests for spin_lock and rw_lock
- Test spin_lock basic operations and mutual exclusion
- Test spin_lock critical section protection
- Test spin_lock concurrent access and contention
- Test rw_lock write lock (exclusive access)
- Test rw_lock read lock (shared access)
- Test multiple concurrent readers
- Test writers have exclusive access
- Test readers and writers don't overlap
- Test various read-write patterns
- Test rapid lock/unlock operations
- Test mixed concurrent operations
- Test write lock blocks readers correctly
2025-11-30 04:14:52 +00:00
木头云
4832c47345 test(condition): add comprehensive unit tests for ipc::sync::condition
- Test condition variable construction (default and named)
- Test wait, notify, and broadcast operations
- Test timed wait with timeout and infinite wait
- Test integration with mutex for synchronization
- Test producer-consumer patterns with condition variables
- Test multiple waiters with broadcast
- Test spurious wakeup handling patterns
- Test named condition variable sharing between threads
- Test resource cleanup (clear, clear_storage)
- Test edge cases (after clear, immediate notify)
2025-11-30 04:13:57 +00:00
木头云
6e17ce184b test(semaphore): add comprehensive unit tests for ipc::sync::semaphore
- Test semaphore construction (default and named with count)
- Test wait and post operations
- Test timed wait with various timeout values
- Test producer-consumer patterns
- Test multiple producers and consumers scenarios
- Test concurrent post operations
- Test initial count behavior
- Test named semaphore sharing between threads
- Test resource cleanup (clear, clear_storage)
- Test edge cases (zero timeout, after clear, high frequency)
2025-11-30 04:13:04 +00:00
木头云
b4ad3c69c9 test(mutex): add comprehensive unit tests for ipc::sync::mutex
- Test mutex construction (default and named)
- Test lock/unlock operations
- Test try_lock functionality
- Test timed lock with various timeout values
- Test critical section protection
- Test concurrent access and contention scenarios
- Test inter-thread synchronization with named mutex
- Test resource cleanup (clear, clear_storage)
- Test native handle access
- Test edge cases (reopen, zero timeout, rapid operations)
- Test exception safety of try_lock
2025-11-30 04:12:14 +00:00
木头云
280a21c89a test(shm): add comprehensive unit tests for shared memory
- Test low-level API (acquire, get_mem, release, remove)
- Test reference counting functionality (get_ref, sub_ref)
- Test high-level handle class interface
- Test all handle methods (valid, size, name, get, etc.)
- Test handle lifecycle (construction, move, swap, assignment)
- Test different access modes (create, open, create|open)
- Test detach/attach functionality
- Test multi-handle access to same memory
- Test data persistence across handles
- Test edge cases (large segments, multiple simultaneous handles)
2025-11-30 04:11:13 +00:00
木头云
3d743d57ac test(buffer): add comprehensive unit tests for ipc::buffer
- Test all constructors (default, with destructor, from array, from char)
- Test move semantics and assignment operators
- Test all accessor methods (data, size, empty, get<T>)
- Test conversion methods (to_tuple, to_vector)
- Test comparison operators (==, !=)
- Test edge cases (empty buffers, large buffers, self-assignment)
- Verify destructor callback functionality
2025-11-30 04:10:07 +00:00
木头云
17eaa573ca chore(test): archive existing test cases to test/archive
- Move all existing test files (*.cpp, *.h) to test/archive/
- Rename CMakeLists.txt to CMakeLists.txt.old in archive
- Prepare for comprehensive unit test refactoring
2025-11-30 04:04:10 +00:00
30 changed files with 4077 additions and 277 deletions

View File

@ -17,14 +17,16 @@ public:
buffer();
buffer(void* p, std::size_t s, destructor_t d);
buffer(void* p, std::size_t s, destructor_t d, void* additional);
// mem_to_free: pointer to be passed to destructor (if different from p)
// Use case: when p points into a larger allocated block that needs to be freed
buffer(void* p, std::size_t s, destructor_t d, void* mem_to_free);
buffer(void* p, std::size_t s);
template <std::size_t N>
explicit buffer(byte_t const (& data)[N])
explicit buffer(byte_t (& data)[N])
: buffer(data, sizeof(data)) {
}
explicit buffer(char const & c);
explicit buffer(char & c);
buffer(buffer&& rhs);
~buffer();

View File

@ -17,8 +17,30 @@ enum : unsigned {
IPC_EXPORT id_t acquire(char const * name, std::size_t size, unsigned mode = create | open);
IPC_EXPORT void * get_mem(id_t id, std::size_t * size);
// Release shared memory resource and clean up disk file if reference count reaches zero.
// This function decrements the reference counter. When the counter reaches zero, it:
// 1. Unmaps the shared memory region
// 2. Removes the backing file from disk (shm_unlink on POSIX)
// 3. Frees the id structure
// After calling this function, the id becomes invalid and must not be used again.
// Returns: The reference count before decrement, or -1 on error.
IPC_EXPORT std::int32_t release(id_t id) noexcept;
// Release shared memory resource and force cleanup of disk file.
// This function calls release(id) internally, then unconditionally attempts to
// remove the backing file. WARNING: Do NOT call this after release(id) on the
// same id, as the id is already freed by release(). Use this function alone,
// not in combination with release().
// Typical use case: Force cleanup when you want to ensure the disk file is removed
// regardless of reference count state.
IPC_EXPORT void remove (id_t id) noexcept;
// Remove shared memory backing file by name.
// This function only removes the disk file and does not affect any active memory
// mappings or id structures. Use this for cleanup of orphaned files or for explicit
// file removal without affecting runtime resources.
// Safe to call at any time, even if shared memory is still in use elsewhere.
IPC_EXPORT void remove (char const * name) noexcept;
IPC_EXPORT std::int32_t get_ref(id_t id);

View File

@ -38,16 +38,16 @@ buffer::buffer(void* p, std::size_t s, destructor_t d)
: p_(p_->make(p, s, d, nullptr)) {
}
buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
: p_(p_->make(p, s, d, additional)) {
buffer::buffer(void* p, std::size_t s, destructor_t d, void* mem_to_free)
: p_(p_->make(p, s, d, mem_to_free)) {
}
buffer::buffer(void* p, std::size_t s)
: buffer(p, s, nullptr) {
}
buffer::buffer(char const & c)
: buffer(const_cast<char*>(&c), 1) {
buffer::buffer(char & c)
: buffer(&c, 1) {
}
buffer::buffer(buffer&& rhs)

View File

@ -27,7 +27,7 @@ public:
return false;
}
} else {
auto ts = detail::make_timespec(tm);
auto ts = linux_::detail::make_timespec(tm);
int eno = A0_SYSERR(a0_cnd_timedwait(native(), static_cast<a0_mtx_t *>(mtx.native()), {ts}));
if (eno != 0) {
if (eno != ETIMEDOUT) {

View File

@ -10,6 +10,7 @@
#include "a0/err_macro.h"
namespace ipc {
namespace linux_ {
namespace detail {
inline bool calc_wait_time(timespec &ts, std::uint64_t tm /*ms*/) noexcept {
@ -43,4 +44,5 @@ inline timespec make_timespec(std::uint64_t tm /*ms*/) noexcept(false) {
}
} // namespace detail
} // namespace linux_
} // namespace ipc

View File

@ -25,7 +25,7 @@ public:
bool lock(std::uint64_t tm) noexcept {
if (!valid()) return false;
for (;;) {
auto ts = detail::make_timespec(tm);
auto ts = linux_::detail::make_timespec(tm);
int eno = A0_SYSERR(
(tm == invalid_value) ? a0_mtx_lock(native())
: a0_mtx_timedlock(native(), {ts}));
@ -56,7 +56,7 @@ public:
bool try_lock() noexcept(false) {
if (!valid()) return false;
int eno = A0_SYSERR(a0_mtx_timedlock(native(), {detail::make_timespec(0)}));
int eno = A0_SYSERR(a0_mtx_timedlock(native(), {linux_::detail::make_timespec(0)}));
switch (eno) {
case 0:
return true;

View File

@ -115,7 +115,7 @@ public:
}
break;
default: {
auto ts = detail::make_timespec(tm);
auto ts = posix_::detail::make_timespec(tm);
int eno;
if ((eno = ::pthread_cond_timedwait(cond_, static_cast<pthread_mutex_t *>(mtx.native()), &ts)) != 0) {
if (eno != ETIMEDOUT) {

View File

@ -10,6 +10,7 @@
#include "libipc/utility/log.h"
namespace ipc {
namespace posix_ {
namespace detail {
inline bool calc_wait_time(timespec &ts, std::uint64_t tm /*ms*/) noexcept {
@ -36,4 +37,5 @@ inline timespec make_timespec(std::uint64_t tm /*ms*/) noexcept(false) {
}
} // namespace detail
} // namespace posix_
} // namespace ipc

View File

@ -196,7 +196,7 @@ public:
bool lock(std::uint64_t tm) noexcept {
if (!valid()) return false;
for (;;) {
auto ts = detail::make_timespec(tm);
auto ts = posix_::detail::make_timespec(tm);
int eno = (tm == invalid_value)
? ::pthread_mutex_lock(mutex_)
: ::pthread_mutex_timedlock(mutex_, &ts);
@ -230,7 +230,7 @@ public:
bool try_lock() noexcept(false) {
if (!valid()) return false;
auto ts = detail::make_timespec(0);
auto ts = posix_::detail::make_timespec(0);
int eno = ::pthread_mutex_timedlock(mutex_, &ts);
switch (eno) {
case 0:

View File

@ -88,7 +88,7 @@ public:
return false;
}
} else {
auto ts = detail::make_timespec(tm);
auto ts = posix_::detail::make_timespec(tm);
if (::sem_timedwait(h_, &ts) != 0) {
if (errno != ETIMEDOUT) {
ipc::error("fail sem_timedwait[%d]: tm = %zd, tv_sec = %ld, tv_nsec = %ld\n",

View File

@ -5,6 +5,7 @@
#include <Windows.h>
#endif
#include <atomic>
#include <string>
#include <utility>
@ -20,12 +21,24 @@
namespace {
struct info_t {
std::atomic<std::int32_t> acc_;
};
struct id_info_t {
HANDLE h_ = NULL;
void* mem_ = nullptr;
std::size_t size_ = 0;
};
constexpr std::size_t calc_size(std::size_t size) {
return ((((size - 1) / alignof(info_t)) + 1) * alignof(info_t)) + sizeof(info_t);
}
inline auto& acc_of(void* mem, std::size_t size) {
return reinterpret_cast<info_t*>(static_cast<ipc::byte_t*>(mem) + size - sizeof(info_t))->acc_;
}
} // internal-linkage
namespace ipc {
@ -48,8 +61,9 @@ id_t acquire(char const * name, std::size_t size, unsigned mode) {
}
// Creates or opens a named file mapping object for a specified file.
else {
std::size_t alloc_size = calc_size(size);
h = ::CreateFileMapping(INVALID_HANDLE_VALUE, detail::get_sa(), PAGE_READWRITE | SEC_COMMIT,
0, static_cast<DWORD>(size), fmt_name.c_str());
0, static_cast<DWORD>(alloc_size), fmt_name.c_str());
DWORD err = ::GetLastError();
// If the object exists before the function call, the function returns a handle to the existing object
// (with its current size, not the specified size), and GetLastError returns ERROR_ALREADY_EXISTS.
@ -68,12 +82,28 @@ id_t acquire(char const * name, std::size_t size, unsigned mode) {
return ii;
}
std::int32_t get_ref(id_t) {
return 0;
std::int32_t get_ref(id_t id) {
if (id == nullptr) {
return 0;
}
auto ii = static_cast<id_info_t*>(id);
if (ii->mem_ == nullptr || ii->size_ == 0) {
return 0;
}
return acc_of(ii->mem_, calc_size(ii->size_)).load(std::memory_order_acquire);
}
void sub_ref(id_t) {
// Do Nothing.
void sub_ref(id_t id) {
if (id == nullptr) {
ipc::error("fail sub_ref: invalid id (null)\n");
return;
}
auto ii = static_cast<id_info_t*>(id);
if (ii->mem_ == nullptr || ii->size_ == 0) {
ipc::error("fail sub_ref: invalid id (mem = %p, size = %zd)\n", ii->mem_, ii->size_);
return;
}
acc_of(ii->mem_, calc_size(ii->size_)).fetch_sub(1, std::memory_order_acq_rel);
}
void * get_mem(id_t id, std::size_t * size) {
@ -100,9 +130,16 @@ void * get_mem(id_t id, std::size_t * size) {
ipc::error("fail VirtualQuery[%d]\n", static_cast<int>(::GetLastError()));
return nullptr;
}
ii->mem_ = mem;
ii->size_ = static_cast<std::size_t>(mem_info.RegionSize);
std::size_t actual_size = static_cast<std::size_t>(mem_info.RegionSize);
if (ii->size_ == 0) {
// Opening existing shared memory
ii->size_ = actual_size - sizeof(info_t);
}
// else: Keep user-requested size (already set in acquire)
ii->mem_ = mem;
if (size != nullptr) *size = ii->size_;
// Initialize or increment reference counter
acc_of(mem, calc_size(ii->size_)).fetch_add(1, std::memory_order_release);
return static_cast<void *>(mem);
}
@ -111,17 +148,21 @@ std::int32_t release(id_t id) noexcept {
ipc::error("fail release: invalid id (null)\n");
return -1;
}
std::int32_t ret = -1;
auto ii = static_cast<id_info_t*>(id);
if (ii->mem_ == nullptr || ii->size_ == 0) {
ipc::error("fail release: invalid id (mem = %p, size = %zd)\n", ii->mem_, ii->size_);
}
else ::UnmapViewOfFile(static_cast<LPCVOID>(ii->mem_));
else {
ret = acc_of(ii->mem_, calc_size(ii->size_)).fetch_sub(1, std::memory_order_acq_rel);
::UnmapViewOfFile(static_cast<LPCVOID>(ii->mem_));
}
if (ii->h_ == NULL) {
ipc::error("fail release: invalid id (h = null)\n");
}
else ::CloseHandle(ii->h_);
mem::free(ii);
return 0;
return ret;
}
void remove(id_t id) noexcept {

10
test/CMakeLists.txt Executable file → Normal file
View File

@ -15,11 +15,15 @@ include_directories(
${LIBIPC_PROJECT_DIR}/3rdparty
${LIBIPC_PROJECT_DIR}/3rdparty/gtest/include)
# Collect only new test files (exclude archive directory)
file(GLOB SRC_FILES
${LIBIPC_PROJECT_DIR}/test/*.cpp
# ${LIBIPC_PROJECT_DIR}/test/profiler/*.cpp
${LIBIPC_PROJECT_DIR}/test/test_*.cpp
)
file(GLOB HEAD_FILES ${LIBIPC_PROJECT_DIR}/test/*.h)
file(GLOB HEAD_FILES ${LIBIPC_PROJECT_DIR}/test/test_*.h)
# Ensure we don't include archived tests
list(FILTER SRC_FILES EXCLUDE REGEX "archive")
list(FILTER HEAD_FILES EXCLUDE REGEX "archive")
add_executable(${PROJECT_NAME} ${SRC_FILES} ${HEAD_FILES})

28
test/archive/CMakeLists.txt.old Executable file
View File

@ -0,0 +1,28 @@
project(test-ipc)
if(NOT MSVC)
add_compile_options(
-Wno-attributes
-Wno-missing-field-initializers
-Wno-unused-variable
-Wno-unused-function)
endif()
include_directories(
${LIBIPC_PROJECT_DIR}/include
${LIBIPC_PROJECT_DIR}/src
${LIBIPC_PROJECT_DIR}/test
${LIBIPC_PROJECT_DIR}/3rdparty
${LIBIPC_PROJECT_DIR}/3rdparty/gtest/include)
file(GLOB SRC_FILES
${LIBIPC_PROJECT_DIR}/test/*.cpp
# ${LIBIPC_PROJECT_DIR}/test/profiler/*.cpp
)
file(GLOB HEAD_FILES ${LIBIPC_PROJECT_DIR}/test/*.h)
add_executable(${PROJECT_NAME} ${SRC_FILES} ${HEAD_FILES})
link_directories(${LIBIPC_PROJECT_DIR}/3rdparty/gperftools)
target_link_libraries(${PROJECT_NAME} gtest gtest_main ipc)
#target_link_libraries(${PROJECT_NAME} tcmalloc_minimal)

134
test/archive/test_shm.cpp Executable file
View File

@ -0,0 +1,134 @@
#include <cstring>
#include <cstdint>
#include <thread>
#include "libipc/shm.h"
#include "test.h"
using namespace ipc::shm;
namespace {
TEST(SHM, acquire) {
handle shm_hd;
EXPECT_FALSE(shm_hd.valid());
EXPECT_TRUE(shm_hd.acquire("my-test-1", 1024));
EXPECT_TRUE(shm_hd.valid());
EXPECT_STREQ(shm_hd.name(), "my-test-1");
EXPECT_TRUE(shm_hd.acquire("my-test-2", 2048));
EXPECT_TRUE(shm_hd.valid());
EXPECT_STREQ(shm_hd.name(), "my-test-2");
EXPECT_TRUE(shm_hd.acquire("my-test-3", 4096));
EXPECT_TRUE(shm_hd.valid());
EXPECT_STREQ(shm_hd.name(), "my-test-3");
}
TEST(SHM, release) {
handle shm_hd;
EXPECT_FALSE(shm_hd.valid());
shm_hd.release();
EXPECT_FALSE(shm_hd.valid());
EXPECT_TRUE(shm_hd.acquire("release-test-1", 512));
EXPECT_TRUE(shm_hd.valid());
shm_hd.release();
EXPECT_FALSE(shm_hd.valid());
}
TEST(SHM, get) {
handle shm_hd;
EXPECT_TRUE(shm_hd.get() == nullptr);
EXPECT_TRUE(shm_hd.acquire("get-test", 2048));
auto mem = shm_hd.get();
EXPECT_TRUE(mem != nullptr);
EXPECT_TRUE(mem == shm_hd.get());
std::uint8_t buf[1024] = {};
EXPECT_TRUE(memcmp(mem, buf, sizeof(buf)) == 0);
handle shm_other(shm_hd.name(), shm_hd.size());
EXPECT_TRUE(shm_other.get() != shm_hd.get());
}
TEST(SHM, hello) {
handle shm_hd;
EXPECT_TRUE(shm_hd.acquire("hello-test", 128));
auto mem = shm_hd.get();
EXPECT_TRUE(mem != nullptr);
constexpr char hello[] = "hello!";
std::memcpy(mem, hello, sizeof(hello));
EXPECT_STREQ((char const *)shm_hd.get(), hello);
shm_hd.release();
EXPECT_TRUE(shm_hd.get() == nullptr);
EXPECT_TRUE(shm_hd.acquire("hello-test", 1024));
mem = shm_hd.get();
EXPECT_TRUE(mem != nullptr);
std::uint8_t buf[1024] = {};
EXPECT_TRUE(memcmp(mem, buf, sizeof(buf)) == 0);
std::memcpy(mem, hello, sizeof(hello));
EXPECT_STREQ((char const *)shm_hd.get(), hello);
}
TEST(SHM, mt) {
handle shm_hd;
EXPECT_TRUE(shm_hd.acquire("mt-test", 256));
constexpr char hello[] = "hello!";
std::memcpy(shm_hd.get(), hello, sizeof(hello));
std::thread {
[&shm_hd] {
handle shm_mt(shm_hd.name(), shm_hd.size());
shm_hd.release();
constexpr char hello[] = "hello!";
EXPECT_STREQ((char const *)shm_mt.get(), hello);
}
}.join();
EXPECT_TRUE(shm_hd.get() == nullptr);
EXPECT_FALSE(shm_hd.valid());
EXPECT_TRUE(shm_hd.acquire("mt-test", 1024));
std::uint8_t buf[1024] = {};
EXPECT_TRUE(memcmp(shm_hd.get(), buf, sizeof(buf)) == 0);
}
TEST(SHM, remove) {
{
auto id = ipc::shm::acquire("hello-remove", 111);
EXPECT_TRUE(ipc_ut::expect_exist("hello-remove", true));
ipc::shm::remove(id);
EXPECT_TRUE(ipc_ut::expect_exist("hello-remove", false));
}
{
auto id = ipc::shm::acquire("hello-remove", 111);
EXPECT_TRUE(ipc_ut::expect_exist("hello-remove", true));
ipc::shm::release(id);
EXPECT_TRUE(ipc_ut::expect_exist("hello-remove", true));
ipc::shm::remove("hello-remove");
EXPECT_TRUE(ipc_ut::expect_exist("hello-remove", false));
}
{
handle shm_hd;
EXPECT_TRUE(shm_hd.acquire("mt-test", 256));
EXPECT_TRUE(ipc_ut::expect_exist("mt-test", true));
shm_hd.clear();
EXPECT_TRUE(ipc_ut::expect_exist("mt-test", false));
}
{
handle shm_hd;
EXPECT_TRUE(shm_hd.acquire("mt-test", 256));
EXPECT_TRUE(ipc_ut::expect_exist("mt-test", true));
shm_hd.clear_storage("mt-test");
EXPECT_TRUE(ipc_ut::expect_exist("mt-test", false));
}
}
} // internal-linkage

384
test/test_buffer.cpp Normal file
View File

@ -0,0 +1,384 @@
/**
* @file test_buffer.cpp
* @brief Comprehensive unit tests for ipc::buffer class
*
* This test suite covers all public interfaces of the buffer class including:
* - Constructors (default, with pointer and destructor, from array, from char)
* - Move semantics
* - Copy operations through assignment
* - Basic operations (empty, data, size)
* - Conversion methods (to_tuple, to_vector, get<T>)
* - Comparison operators
*/
#include <gtest/gtest.h>
#include <cstring>
#include <vector>
#include "libipc/buffer.h"
using namespace ipc;
namespace {
// Custom destructor tracker for testing
struct DestructorTracker {
static int count;
static void reset() { count = 0; }
static void destructor(void* p, std::size_t) {
++count;
delete[] static_cast<char*>(p);
}
};
int DestructorTracker::count = 0;
} // anonymous namespace
class BufferTest : public ::testing::Test {
protected:
void SetUp() override {
DestructorTracker::reset();
}
};
// Test default constructor
TEST_F(BufferTest, DefaultConstructor) {
buffer buf;
EXPECT_TRUE(buf.empty());
EXPECT_EQ(buf.size(), 0u);
EXPECT_EQ(buf.data(), nullptr);
}
// Test constructor with pointer, size, and destructor
TEST_F(BufferTest, ConstructorWithDestructor) {
const char* test_data = "Hello, World!";
std::size_t size = std::strlen(test_data) + 1;
char* data = new char[size];
std::strcpy(data, test_data);
buffer buf(data, size, DestructorTracker::destructor);
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), size);
EXPECT_NE(buf.data(), nullptr);
EXPECT_STREQ(static_cast<const char*>(buf.data()), test_data);
}
// Test destructor is called
TEST_F(BufferTest, DestructorCalled) {
{
char* data = new char[100];
buffer buf(data, 100, DestructorTracker::destructor);
EXPECT_EQ(DestructorTracker::count, 0);
}
EXPECT_EQ(DestructorTracker::count, 1);
}
// Test constructor with mem_to_free parameter
// Scenario: allocate a large block, but only use a portion as data
TEST_F(BufferTest, ConstructorWithMemToFree) {
// Allocate a block of 100 bytes
char* allocated_block = new char[100];
// But only use the middle 50 bytes as data (offset 25)
char* data_start = allocated_block + 25;
std::strcpy(data_start, "Offset data");
// When destroyed, should free the entire allocated_block, not just data_start
buffer buf(data_start, 50, DestructorTracker::destructor, allocated_block);
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), 50u);
EXPECT_EQ(buf.data(), data_start);
EXPECT_STREQ(static_cast<const char*>(buf.data()), "Offset data");
// Destructor will be called with allocated_block (not data_start)
// This correctly frees the entire allocation
}
// Test constructor without destructor
TEST_F(BufferTest, ConstructorWithoutDestructor) {
char stack_data[20] = "Stack data";
buffer buf(stack_data, 20);
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), 20u);
EXPECT_EQ(buf.data(), stack_data);
}
// Test constructor from byte array
TEST_F(BufferTest, ConstructorFromByteArray) {
byte_t data[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
buffer buf(data);
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), 10u);
const byte_t* buf_data = buf.get<const byte_t*>();
for (int i = 0; i < 10; ++i) {
EXPECT_EQ(buf_data[i], i);
}
}
// Test constructor from single char
TEST_F(BufferTest, ConstructorFromChar) {
char c = 'X';
buffer buf(c);
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), sizeof(char));
EXPECT_EQ(*buf.get<const char*>(), 'X');
}
// Test move constructor
TEST_F(BufferTest, MoveConstructor) {
char* data = new char[30];
std::strcpy(data, "Move test");
buffer buf1(data, 30, DestructorTracker::destructor);
void* original_ptr = buf1.data();
std::size_t original_size = buf1.size();
buffer buf2(std::move(buf1));
// buf2 should have the original data
EXPECT_EQ(buf2.data(), original_ptr);
EXPECT_EQ(buf2.size(), original_size);
EXPECT_FALSE(buf2.empty());
// buf1 should be empty after move
EXPECT_TRUE(buf1.empty());
EXPECT_EQ(buf1.size(), 0u);
}
// Test swap
TEST_F(BufferTest, Swap) {
char* data1 = new char[20];
char* data2 = new char[30];
std::strcpy(data1, "Buffer 1");
std::strcpy(data2, "Buffer 2");
buffer buf1(data1, 20, DestructorTracker::destructor);
buffer buf2(data2, 30, DestructorTracker::destructor);
void* ptr1 = buf1.data();
void* ptr2 = buf2.data();
std::size_t size1 = buf1.size();
std::size_t size2 = buf2.size();
buf1.swap(buf2);
EXPECT_EQ(buf1.data(), ptr2);
EXPECT_EQ(buf1.size(), size2);
EXPECT_EQ(buf2.data(), ptr1);
EXPECT_EQ(buf2.size(), size1);
}
// Test assignment operator (move semantics)
TEST_F(BufferTest, AssignmentOperator) {
char* data = new char[40];
std::strcpy(data, "Assignment test");
buffer buf1(data, 40, DestructorTracker::destructor);
void* original_ptr = buf1.data();
buffer buf2;
buf2 = std::move(buf1);
EXPECT_EQ(buf2.data(), original_ptr);
EXPECT_FALSE(buf2.empty());
}
// Test empty() method
TEST_F(BufferTest, EmptyMethod) {
buffer buf1;
EXPECT_TRUE(buf1.empty());
char* data = new char[10];
buffer buf2(data, 10, DestructorTracker::destructor);
EXPECT_FALSE(buf2.empty());
}
// Test data() const method
TEST_F(BufferTest, DataConstMethod) {
const char* test_str = "Const data test";
std::size_t size = std::strlen(test_str) + 1;
char* data = new char[size];
std::strcpy(data, test_str);
const buffer buf(data, size, DestructorTracker::destructor);
const void* const_data = buf.data();
EXPECT_NE(const_data, nullptr);
EXPECT_STREQ(static_cast<const char*>(const_data), test_str);
}
// Test get<T>() template method
TEST_F(BufferTest, GetTemplateMethod) {
int* int_data = new int[5]{1, 2, 3, 4, 5};
buffer buf(int_data, 5 * sizeof(int), [](void* p, std::size_t) {
delete[] static_cast<int*>(p);
});
int* retrieved = buf.get<int*>();
EXPECT_NE(retrieved, nullptr);
EXPECT_EQ(retrieved[0], 1);
EXPECT_EQ(retrieved[4], 5);
}
// Test to_tuple() non-const version
TEST_F(BufferTest, ToTupleNonConst) {
char* data = new char[25];
std::strcpy(data, "Tuple test");
buffer buf(data, 25, DestructorTracker::destructor);
// C++14 compatible: use std::get instead of structured binding
auto tuple = buf.to_tuple();
auto ptr = std::get<0>(tuple);
auto size = std::get<1>(tuple);
EXPECT_EQ(ptr, buf.data());
EXPECT_EQ(size, buf.size());
EXPECT_EQ(size, 25u);
}
// Test to_tuple() const version
TEST_F(BufferTest, ToTupleConst) {
char* data = new char[30];
std::strcpy(data, "Const tuple");
const buffer buf(data, 30, DestructorTracker::destructor);
// C++14 compatible: use std::get instead of structured binding
auto tuple = buf.to_tuple();
auto ptr = std::get<0>(tuple);
auto size = std::get<1>(tuple);
EXPECT_EQ(ptr, buf.data());
EXPECT_EQ(size, buf.size());
EXPECT_EQ(size, 30u);
}
// Test to_vector() method
TEST_F(BufferTest, ToVector) {
byte_t data_arr[5] = {10, 20, 30, 40, 50};
buffer buf(data_arr, 5);
std::vector<byte_t> vec = buf.to_vector();
ASSERT_EQ(vec.size(), 5u);
EXPECT_EQ(vec[0], 10);
EXPECT_EQ(vec[1], 20);
EXPECT_EQ(vec[2], 30);
EXPECT_EQ(vec[3], 40);
EXPECT_EQ(vec[4], 50);
}
// Test equality operator
TEST_F(BufferTest, EqualityOperator) {
byte_t data1[5] = {1, 2, 3, 4, 5};
byte_t data2[5] = {1, 2, 3, 4, 5};
byte_t data3[5] = {5, 4, 3, 2, 1};
buffer buf1(data1, 5);
buffer buf2(data2, 5);
buffer buf3(data3, 5);
EXPECT_TRUE(buf1 == buf2);
EXPECT_FALSE(buf1 == buf3);
}
// Test inequality operator
TEST_F(BufferTest, InequalityOperator) {
byte_t data1[5] = {1, 2, 3, 4, 5};
byte_t data2[5] = {1, 2, 3, 4, 5};
byte_t data3[5] = {5, 4, 3, 2, 1};
buffer buf1(data1, 5);
buffer buf2(data2, 5);
buffer buf3(data3, 5);
EXPECT_FALSE(buf1 != buf2);
EXPECT_TRUE(buf1 != buf3);
}
// Test size mismatch in equality
TEST_F(BufferTest, EqualityWithDifferentSizes) {
byte_t data1[5] = {1, 2, 3, 4, 5};
byte_t data2[3] = {1, 2, 3};
buffer buf1(data1, 5);
buffer buf2(data2, 3);
EXPECT_FALSE(buf1 == buf2);
EXPECT_TRUE(buf1 != buf2);
}
// Test empty buffers comparison
TEST_F(BufferTest, EmptyBuffersComparison) {
buffer buf1;
buffer buf2;
EXPECT_TRUE(buf1 == buf2);
EXPECT_FALSE(buf1 != buf2);
}
// Test large buffer
TEST_F(BufferTest, LargeBuffer) {
const std::size_t large_size = 1024 * 1024; // 1MB
char* large_data = new char[large_size];
// Fill with pattern
for (std::size_t i = 0; i < large_size; ++i) {
large_data[i] = static_cast<char>(i % 256);
}
buffer buf(large_data, large_size, [](void* p, std::size_t) {
delete[] static_cast<char*>(p);
});
EXPECT_FALSE(buf.empty());
EXPECT_EQ(buf.size(), large_size);
// Verify pattern
const char* data_ptr = buf.get<const char*>();
for (std::size_t i = 0; i < 100; ++i) { // Check first 100 bytes
EXPECT_EQ(data_ptr[i], static_cast<char>(i % 256));
}
}
// Test multiple move operations
TEST_F(BufferTest, MultipleMoves) {
char* data = new char[15];
std::strcpy(data, "Multi-move");
void* original_ptr = data;
buffer buf1(data, 15, DestructorTracker::destructor);
buffer buf2(std::move(buf1));
buffer buf3(std::move(buf2));
buffer buf4(std::move(buf3));
EXPECT_EQ(buf4.data(), original_ptr);
EXPECT_TRUE(buf1.empty());
EXPECT_TRUE(buf2.empty());
EXPECT_TRUE(buf3.empty());
EXPECT_FALSE(buf4.empty());
}
// Test self-assignment safety
TEST_F(BufferTest, SelfAssignment) {
char* data = new char[20];
std::strcpy(data, "Self-assign");
buffer buf(data, 20, DestructorTracker::destructor);
void* original_ptr = buf.data();
std::size_t original_size = buf.size();
buf = std::move(buf); // Self-assignment
// Should remain valid
EXPECT_EQ(buf.data(), original_ptr);
EXPECT_EQ(buf.size(), original_size);
}

550
test/test_condition.cpp Normal file
View File

@ -0,0 +1,550 @@
/**
* @file test_condition.cpp
* @brief Comprehensive unit tests for ipc::sync::condition class
*
* This test suite covers:
* - Condition variable construction (default and named)
* - Wait, notify, and broadcast operations
* - Timed wait with timeout
* - Integration with mutex
* - Producer-consumer patterns with condition variables
* - Resource cleanup
*/
#include <gtest/gtest.h>
#include <thread>
#include <chrono>
#include <atomic>
#include <vector>
#include "libipc/condition.h"
#include "libipc/mutex.h"
#include "libipc/def.h"
using namespace ipc;
using namespace ipc::sync;
namespace {
std::string generate_unique_cv_name(const char* prefix) {
static int counter = 0;
return std::string(prefix) + "_cv_" + std::to_string(++counter);
}
} // anonymous namespace
class ConditionTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
};
// Test default constructor
TEST_F(ConditionTest, DefaultConstructor) {
condition cv;
}
// Test named constructor
TEST_F(ConditionTest, NamedConstructor) {
std::string name = generate_unique_cv_name("named");
condition cv(name.c_str());
EXPECT_TRUE(cv.valid());
}
// Test native() methods
TEST_F(ConditionTest, NativeHandle) {
std::string name = generate_unique_cv_name("native");
condition cv(name.c_str());
ASSERT_TRUE(cv.valid());
const void* const_handle = static_cast<const condition&>(cv).native();
void* handle = cv.native();
EXPECT_NE(const_handle, nullptr);
EXPECT_NE(handle, nullptr);
}
// Test valid() method
TEST_F(ConditionTest, Valid) {
condition cv1;
std::string name = generate_unique_cv_name("valid");
condition cv2(name.c_str());
EXPECT_TRUE(cv2.valid());
}
// Test open() method
TEST_F(ConditionTest, Open) {
std::string name = generate_unique_cv_name("open");
condition cv;
bool result = cv.open(name.c_str());
EXPECT_TRUE(result);
EXPECT_TRUE(cv.valid());
}
// Test close() method
TEST_F(ConditionTest, Close) {
std::string name = generate_unique_cv_name("close");
condition cv(name.c_str());
ASSERT_TRUE(cv.valid());
cv.close();
EXPECT_FALSE(cv.valid());
}
// Test clear() method
TEST_F(ConditionTest, Clear) {
std::string name = generate_unique_cv_name("clear");
condition cv(name.c_str());
ASSERT_TRUE(cv.valid());
cv.clear();
EXPECT_FALSE(cv.valid());
}
// Test clear_storage() static method
TEST_F(ConditionTest, ClearStorage) {
std::string name = generate_unique_cv_name("clear_storage");
{
condition cv(name.c_str());
EXPECT_TRUE(cv.valid());
}
condition::clear_storage(name.c_str());
}
// Test basic wait and notify
TEST_F(ConditionTest, WaitNotify) {
std::string cv_name = generate_unique_cv_name("wait_notify");
std::string mtx_name = generate_unique_cv_name("wait_notify_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<bool> notified{false};
std::thread waiter([&]() {
mtx.lock();
cv.wait(mtx);
notified.store(true);
mtx.unlock();
});
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.lock();
cv.notify(mtx);
mtx.unlock();
waiter.join();
EXPECT_TRUE(notified.load());
}
// Test broadcast to multiple waiters
TEST_F(ConditionTest, Broadcast) {
std::string cv_name = generate_unique_cv_name("broadcast");
std::string mtx_name = generate_unique_cv_name("broadcast_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<int> notified_count{0};
const int num_waiters = 5;
std::vector<std::thread> waiters;
for (int i = 0; i < num_waiters; ++i) {
waiters.emplace_back([&]() {
mtx.lock();
cv.wait(mtx);
++notified_count;
mtx.unlock();
});
}
std::this_thread::sleep_for(std::chrono::milliseconds(100));
mtx.lock();
cv.broadcast(mtx);
mtx.unlock();
for (auto& t : waiters) {
t.join();
}
EXPECT_EQ(notified_count.load(), num_waiters);
}
// Test timed wait with timeout
TEST_F(ConditionTest, TimedWait) {
std::string cv_name = generate_unique_cv_name("timed_wait");
std::string mtx_name = generate_unique_cv_name("timed_wait_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
auto start = std::chrono::steady_clock::now();
mtx.lock();
bool result = cv.wait(mtx, 100); // 100ms timeout
mtx.unlock();
auto end = std::chrono::steady_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
EXPECT_FALSE(result); // Should timeout
EXPECT_GE(elapsed, 80); // Allow some tolerance
}
// Test wait with immediate notify
TEST_F(ConditionTest, ImmediateNotify) {
std::string cv_name = generate_unique_cv_name("immediate");
std::string mtx_name = generate_unique_cv_name("immediate_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<bool> wait_started{false};
std::atomic<bool> notified{false};
std::thread waiter([&]() {
mtx.lock();
wait_started.store(true);
cv.wait(mtx, 1000); // 1 second timeout
notified.store(true);
mtx.unlock();
});
// Wait for waiter to start
while (!wait_started.load()) {
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
std::this_thread::sleep_for(std::chrono::milliseconds(10));
mtx.lock();
cv.notify(mtx);
mtx.unlock();
waiter.join();
EXPECT_TRUE(notified.load());
}
// Test producer-consumer with condition variable
TEST_F(ConditionTest, ProducerConsumer) {
std::string cv_name = generate_unique_cv_name("prod_cons");
std::string mtx_name = generate_unique_cv_name("prod_cons_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<int> buffer{0};
std::atomic<bool> ready{false};
std::atomic<int> consumed_value{0};
std::thread producer([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.lock();
buffer.store(42);
ready.store(true);
cv.notify(mtx);
mtx.unlock();
});
std::thread consumer([&]() {
mtx.lock();
while (!ready.load()) {
cv.wait(mtx, 2000);
}
consumed_value.store(buffer.load());
mtx.unlock();
});
producer.join();
consumer.join();
EXPECT_EQ(consumed_value.load(), 42);
}
// Test multiple notify operations
TEST_F(ConditionTest, MultipleNotify) {
std::string cv_name = generate_unique_cv_name("multi_notify");
std::string mtx_name = generate_unique_cv_name("multi_notify_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<int> notify_count{0};
const int num_notifications = 3;
std::thread waiter([&]() {
for (int i = 0; i < num_notifications; ++i) {
mtx.lock();
cv.wait(mtx, 1000);
++notify_count;
mtx.unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
});
for (int i = 0; i < num_notifications; ++i) {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.lock();
cv.notify(mtx);
mtx.unlock();
}
waiter.join();
EXPECT_EQ(notify_count.load(), num_notifications);
}
// Test notify vs broadcast
TEST_F(ConditionTest, NotifyVsBroadcast) {
std::string cv_name = generate_unique_cv_name("notify_vs_broadcast");
std::string mtx_name = generate_unique_cv_name("notify_vs_broadcast_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
// Test notify (should wake one)
std::atomic<int> notify_woken{0};
std::vector<std::thread> notify_waiters;
for (int i = 0; i < 3; ++i) {
notify_waiters.emplace_back([&]() {
mtx.lock();
cv.wait(mtx, 100);
++notify_woken;
mtx.unlock();
});
}
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.lock();
cv.notify(mtx); // Wake one
mtx.unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(150));
for (auto& t : notify_waiters) {
t.join();
}
// At least one should be woken by notify
EXPECT_GE(notify_woken.load(), 1);
}
// Test condition variable with spurious wakeups pattern
TEST_F(ConditionTest, SpuriousWakeupPattern) {
std::string cv_name = generate_unique_cv_name("spurious");
std::string mtx_name = generate_unique_cv_name("spurious_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<bool> predicate{false};
std::atomic<bool> done{false};
std::thread waiter([&]() {
mtx.lock();
while (!predicate.load()) {
if (!cv.wait(mtx, 100)) {
// Timeout - check predicate again
if (predicate.load()) break;
}
}
done.store(true);
mtx.unlock();
});
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.lock();
predicate.store(true);
cv.notify(mtx);
mtx.unlock();
waiter.join();
EXPECT_TRUE(done.load());
}
// Test reopen after close
TEST_F(ConditionTest, ReopenAfterClose) {
std::string name = generate_unique_cv_name("reopen");
condition cv;
ASSERT_TRUE(cv.open(name.c_str()));
EXPECT_TRUE(cv.valid());
cv.close();
EXPECT_FALSE(cv.valid());
ASSERT_TRUE(cv.open(name.c_str()));
EXPECT_TRUE(cv.valid());
}
// Test named condition variable sharing between threads
TEST_F(ConditionTest, NamedSharing) {
std::string cv_name = generate_unique_cv_name("sharing");
std::string mtx_name = generate_unique_cv_name("sharing_mtx");
std::atomic<int> value{0};
std::thread t1([&]() {
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
mtx.lock();
cv.wait(mtx, 1000);
value.store(100);
mtx.unlock();
});
std::thread t2([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
mtx.lock();
cv.notify(mtx);
mtx.unlock();
});
t1.join();
t2.join();
EXPECT_EQ(value.load(), 100);
}
// Test infinite wait
TEST_F(ConditionTest, InfiniteWait) {
std::string cv_name = generate_unique_cv_name("infinite");
std::string mtx_name = generate_unique_cv_name("infinite_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<bool> woken{false};
std::thread waiter([&]() {
mtx.lock();
cv.wait(mtx, invalid_value); // Infinite wait
woken.store(true);
mtx.unlock();
});
std::this_thread::sleep_for(std::chrono::milliseconds(100));
mtx.lock();
cv.notify(mtx);
mtx.unlock();
waiter.join();
EXPECT_TRUE(woken.load());
}
// Test broadcast with sequential waiters
TEST_F(ConditionTest, BroadcastSequential) {
std::string cv_name = generate_unique_cv_name("broadcast_seq");
std::string mtx_name = generate_unique_cv_name("broadcast_seq_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
ASSERT_TRUE(mtx.valid());
std::atomic<int> processed{0};
const int num_threads = 4;
std::vector<std::thread> threads;
for (int i = 0; i < num_threads; ++i) {
threads.emplace_back([&]() {
mtx.lock();
cv.wait(mtx, 2000);
++processed;
mtx.unlock();
});
}
std::this_thread::sleep_for(std::chrono::milliseconds(100));
mtx.lock();
cv.broadcast(mtx);
mtx.unlock();
for (auto& t : threads) {
t.join();
}
EXPECT_EQ(processed.load(), num_threads);
}
// Test operations after clear
TEST_F(ConditionTest, AfterClear) {
std::string cv_name = generate_unique_cv_name("after_clear");
std::string mtx_name = generate_unique_cv_name("after_clear_mtx");
condition cv(cv_name.c_str());
mutex mtx(mtx_name.c_str());
ASSERT_TRUE(cv.valid());
cv.clear();
EXPECT_FALSE(cv.valid());
// Operations after clear should fail gracefully
mtx.lock();
EXPECT_FALSE(cv.wait(mtx, 10));
EXPECT_FALSE(cv.notify(mtx));
EXPECT_FALSE(cv.broadcast(mtx));
mtx.unlock();
}

643
test/test_ipc_channel.cpp Normal file
View File

@ -0,0 +1,643 @@
/**
* @file test_ipc_channel.cpp
* @brief Comprehensive unit tests for ipc::route and ipc::channel classes
*
* This test suite covers:
* - Route (single producer, multiple consumer) functionality
* - Channel (multiple producer, multiple consumer) functionality
* - Construction, connection, and disconnection
* - Send and receive operations (blocking and non-blocking)
* - Timeout handling
* - Named channels with prefix
* - Resource cleanup and storage management
* - Clone operations
* - Wait for receiver functionality
* - Error conditions
*/
#include <gtest/gtest.h>
#include <thread>
#include <chrono>
#include <atomic>
#include <vector>
#include <string>
#include <cstring>
#include <mutex>
#include <condition_variable>
#include "libipc/ipc.h"
#include "libipc/buffer.h"
using namespace ipc;
namespace {
// Simple latch implementation for C++14 (similar to C++20 std::latch)
class latch {
public:
explicit latch(std::ptrdiff_t count) : count_(count) {}
void count_down() {
std::unique_lock<std::mutex> lock(mutex_);
if (--count_ <= 0) {
cv_.notify_all();
}
}
void wait() {
std::unique_lock<std::mutex> lock(mutex_);
cv_.wait(lock, [this] { return count_ <= 0; });
}
private:
std::ptrdiff_t count_;
std::mutex mutex_;
std::condition_variable cv_;
};
std::string generate_unique_ipc_name(const char* prefix) {
static int counter = 0;
return std::string(prefix) + "_ipc_" + std::to_string(++counter);
}
// Helper to create a test buffer with data
buffer make_test_buffer(const std::string& data) {
char* mem = new char[data.size() + 1];
std::strcpy(mem, data.c_str());
return buffer(mem, data.size() + 1, [](void* p, std::size_t) {
delete[] static_cast<char*>(p);
});
}
// Helper to check buffer content
bool check_buffer_content(const buffer& buf, const std::string& expected) {
if (buf.empty() || buf.size() != expected.size() + 1) {
return false;
}
return std::strcmp(static_cast<const char*>(buf.data()), expected.c_str()) == 0;
}
} // anonymous namespace
// ========== Route Tests (Single Producer, Multiple Consumer) ==========
class RouteTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
};
// Test default construction
TEST_F(RouteTest, DefaultConstruction) {
route r;
EXPECT_FALSE(r.valid());
}
// Test construction with name
TEST_F(RouteTest, ConstructionWithName) {
std::string name = generate_unique_ipc_name("route_ctor");
route r(name.c_str(), sender);
EXPECT_TRUE(r.valid());
EXPECT_STREQ(r.name(), name.c_str());
}
// Test construction with prefix
TEST_F(RouteTest, ConstructionWithPrefix) {
std::string name = generate_unique_ipc_name("route_prefix");
route r(prefix{"my_prefix"}, name.c_str(), sender);
EXPECT_TRUE(r.valid());
}
// Test move constructor
TEST_F(RouteTest, MoveConstructor) {
std::string name = generate_unique_ipc_name("route_move");
route r1(name.c_str(), sender);
ASSERT_TRUE(r1.valid());
const char* name_ptr = r1.name();
route r2(std::move(r1));
EXPECT_TRUE(r2.valid());
EXPECT_STREQ(r2.name(), name_ptr);
}
// Test assignment operator
TEST_F(RouteTest, Assignment) {
std::string name = generate_unique_ipc_name("route_assign");
route r1(name.c_str(), sender);
route r2;
r2 = std::move(r1);
EXPECT_TRUE(r2.valid());
}
// Test connect method
TEST_F(RouteTest, Connect) {
std::string name = generate_unique_ipc_name("route_connect");
route r;
bool connected = r.connect(name.c_str(), sender);
EXPECT_TRUE(connected);
EXPECT_TRUE(r.valid());
}
// Test connect with prefix
TEST_F(RouteTest, ConnectWithPrefix) {
std::string name = generate_unique_ipc_name("route_connect_prefix");
route r;
bool connected = r.connect(prefix{"test"}, name.c_str(), sender);
EXPECT_TRUE(connected);
EXPECT_TRUE(r.valid());
}
// Test reconnect
TEST_F(RouteTest, Reconnect) {
std::string name = generate_unique_ipc_name("route_reconnect");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
bool reconnected = r.reconnect(sender | receiver);
EXPECT_TRUE(reconnected);
}
// Test disconnect
TEST_F(RouteTest, Disconnect) {
std::string name = generate_unique_ipc_name("route_disconnect");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
r.disconnect();
// After disconnect, behavior depends on implementation
}
// Test clone
TEST_F(RouteTest, Clone) {
std::string name = generate_unique_ipc_name("route_clone");
route r1(name.c_str(), sender);
ASSERT_TRUE(r1.valid());
route r2 = r1.clone();
EXPECT_TRUE(r2.valid());
EXPECT_STREQ(r1.name(), r2.name());
}
// Test mode accessor
TEST_F(RouteTest, Mode) {
std::string name = generate_unique_ipc_name("route_mode");
route r(name.c_str(), sender);
EXPECT_EQ(r.mode(), sender);
}
// Test release
TEST_F(RouteTest, Release) {
std::string name = generate_unique_ipc_name("route_release");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
r.release();
EXPECT_FALSE(r.valid());
}
// Test clear
TEST_F(RouteTest, Clear) {
std::string name = generate_unique_ipc_name("route_clear");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
r.clear();
EXPECT_FALSE(r.valid());
}
// Test clear_storage static method
TEST_F(RouteTest, ClearStorage) {
std::string name = generate_unique_ipc_name("route_clear_storage");
{
route r(name.c_str(), sender);
EXPECT_TRUE(r.valid());
}
route::clear_storage(name.c_str());
}
// Test clear_storage with prefix
TEST_F(RouteTest, ClearStorageWithPrefix) {
std::string name = generate_unique_ipc_name("route_clear_prefix");
{
route r(prefix{"test"}, name.c_str(), sender);
EXPECT_TRUE(r.valid());
}
route::clear_storage(prefix{"test"}, name.c_str());
}
// Test send without receiver (should fail)
TEST_F(RouteTest, SendWithoutReceiver) {
std::string name = generate_unique_ipc_name("route_send_no_recv");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
buffer buf = make_test_buffer("test");
bool sent = r.send(buf, 10); // 10ms timeout
EXPECT_FALSE(sent); // Should fail - no receiver
}
// Test try_send without receiver
TEST_F(RouteTest, TrySendWithoutReceiver) {
std::string name = generate_unique_ipc_name("route_try_send_no_recv");
route r(name.c_str(), sender);
ASSERT_TRUE(r.valid());
buffer buf = make_test_buffer("test");
bool sent = r.try_send(buf, 10);
EXPECT_FALSE(sent);
}
// Test send and receive with buffer
TEST_F(RouteTest, SendReceiveBuffer) {
std::string name = generate_unique_ipc_name("route_send_recv_buf");
route sender_r(name.c_str(), sender);
route receiver_r(name.c_str(), receiver);
ASSERT_TRUE(sender_r.valid());
ASSERT_TRUE(receiver_r.valid());
buffer send_buf = make_test_buffer("Hello Route");
std::thread sender_thread([&]() {
bool sent = sender_r.send(send_buf);
EXPECT_TRUE(sent);
});
std::thread receiver_thread([&]() {
buffer recv_buf = receiver_r.recv();
EXPECT_TRUE(check_buffer_content(recv_buf, "Hello Route"));
});
sender_thread.join();
receiver_thread.join();
}
// Test send and receive with string
TEST_F(RouteTest, SendReceiveString) {
std::string name = generate_unique_ipc_name("route_send_recv_str");
route sender_r(name.c_str(), sender);
route receiver_r(name.c_str(), receiver);
ASSERT_TRUE(sender_r.valid());
ASSERT_TRUE(receiver_r.valid());
std::string test_str = "Test String";
std::thread sender_thread([&]() {
bool sent = sender_r.send(test_str);
EXPECT_TRUE(sent);
});
std::thread receiver_thread([&]() {
buffer recv_buf = receiver_r.recv();
EXPECT_TRUE(check_buffer_content(recv_buf, test_str));
});
sender_thread.join();
receiver_thread.join();
}
// Test send and receive with raw data
TEST_F(RouteTest, SendReceiveRawData) {
std::string name = generate_unique_ipc_name("route_send_recv_raw");
route sender_r(name.c_str(), sender);
route receiver_r(name.c_str(), receiver);
ASSERT_TRUE(sender_r.valid());
ASSERT_TRUE(receiver_r.valid());
const char* data = "Raw Data Test";
std::size_t size = std::strlen(data) + 1;
std::thread sender_thread([&]() {
bool sent = sender_r.send(data, size);
EXPECT_TRUE(sent);
});
std::thread receiver_thread([&]() {
buffer recv_buf = receiver_r.recv();
EXPECT_EQ(recv_buf.size(), size);
EXPECT_STREQ(static_cast<const char*>(recv_buf.data()), data);
});
sender_thread.join();
receiver_thread.join();
}
// Test try_recv when empty
TEST_F(RouteTest, TryRecvEmpty) {
std::string name = generate_unique_ipc_name("route_try_recv_empty");
route r(name.c_str(), receiver);
ASSERT_TRUE(r.valid());
buffer buf = r.try_recv();
EXPECT_TRUE(buf.empty());
}
// Test recv_count
TEST_F(RouteTest, RecvCount) {
std::string name = generate_unique_ipc_name("route_recv_count");
route sender_r(name.c_str(), sender);
route receiver_r(name.c_str(), receiver);
ASSERT_TRUE(sender_r.valid());
ASSERT_TRUE(receiver_r.valid());
std::size_t count = sender_r.recv_count();
EXPECT_GE(count, 0u); // Trivially true for an unsigned count; mainly checks the call is safe
}
// Test wait_for_recv
TEST_F(RouteTest, WaitForRecv) {
std::string name = generate_unique_ipc_name("route_wait_recv");
route sender_r(name.c_str(), sender);
std::thread receiver_thread([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
route receiver_r(name.c_str(), receiver);
});
bool waited = sender_r.wait_for_recv(1, 500);
(void)waited; // Result depends on timing; we only check the call returns
receiver_thread.join();
}
// Test static wait_for_recv
TEST_F(RouteTest, StaticWaitForRecv) {
std::string name = generate_unique_ipc_name("route_static_wait");
std::thread receiver_thread([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
route receiver_r(name.c_str(), receiver);
});
bool waited = route::wait_for_recv(name.c_str(), 1, 500);
(void)waited; // Result depends on timing
receiver_thread.join();
}
// Test one sender, multiple receivers
TEST_F(RouteTest, OneSenderMultipleReceivers) {
std::string name = generate_unique_ipc_name("route_1_to_n");
route sender_r(name.c_str(), sender);
ASSERT_TRUE(sender_r.valid());
const int num_receivers = 3;
std::vector<std::atomic<bool>> received(num_receivers);
for (auto& r : received) r.store(false);
std::vector<std::thread> receivers;
for (int i = 0; i < num_receivers; ++i) {
receivers.emplace_back([&, i]() {
route receiver_r(name.c_str(), receiver);
buffer buf = receiver_r.recv(1000);
if (check_buffer_content(buf, "Broadcast")) {
received[i].store(true);
}
});
}
std::this_thread::sleep_for(std::chrono::milliseconds(50));
sender_r.send(std::string("Broadcast"));
for (auto& t : receivers) {
t.join();
}
// All receivers should receive the message (broadcast)
for (const auto& r : received) {
EXPECT_TRUE(r.load());
}
}
// ========== Channel Tests (Multiple Producer, Multiple Consumer) ==========
class ChannelTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
};
// Test default construction
TEST_F(ChannelTest, DefaultConstruction) {
channel ch;
EXPECT_FALSE(ch.valid());
}
// Test construction with name
TEST_F(ChannelTest, ConstructionWithName) {
std::string name = generate_unique_ipc_name("channel_ctor");
channel ch(name.c_str(), sender);
EXPECT_TRUE(ch.valid());
EXPECT_STREQ(ch.name(), name.c_str());
}
// Test send and receive
TEST_F(ChannelTest, SendReceive) {
std::string name = generate_unique_ipc_name("channel_send_recv");
channel sender_ch(name.c_str(), sender);
channel receiver_ch(name.c_str(), receiver);
ASSERT_TRUE(sender_ch.valid());
ASSERT_TRUE(receiver_ch.valid());
std::thread sender_thread([&]() {
sender_ch.send(std::string("Channel Test"));
});
std::thread receiver_thread([&]() {
buffer buf = receiver_ch.recv();
EXPECT_TRUE(check_buffer_content(buf, "Channel Test"));
});
sender_thread.join();
receiver_thread.join();
}
// Test multiple senders
TEST_F(ChannelTest, MultipleSenders) {
std::string name = generate_unique_ipc_name("channel_multi_send");
channel receiver_ch(name.c_str(), receiver);
ASSERT_TRUE(receiver_ch.valid());
const int num_senders = 3;
std::atomic<int> received_count{0};
std::vector<std::thread> senders;
for (int i = 0; i < num_senders; ++i) {
senders.emplace_back([&, i]() {
channel sender_ch(name.c_str(), sender);
std::string msg = "Sender" + std::to_string(i);
sender_ch.send(msg);
});
}
std::thread receiver([&]() {
for (int i = 0; i < num_senders; ++i) {
buffer buf = receiver_ch.recv(1000);
if (!buf.empty()) {
++received_count;
}
}
});
for (auto& t : senders) {
t.join();
}
receiver.join();
EXPECT_EQ(received_count.load(), num_senders);
}
// Test multiple senders and receivers
TEST_F(ChannelTest, MultipleSendersReceivers) {
std::string name = generate_unique_ipc_name("channel_m_to_n");
const int num_senders = 2;
const int num_receivers = 2;
const int messages_per_sender = 5;
const int total_messages = num_senders * messages_per_sender; // Each receiver should get all messages
std::atomic<int> sent_count{0};
std::atomic<int> received_count{0};
// Use latch to ensure receivers are ready before senders start
latch receivers_ready(num_receivers);
std::vector<std::thread> receivers;
for (int i = 0; i < num_receivers; ++i) {
receivers.emplace_back([&, i]() {
channel ch(name.c_str(), receiver);
receivers_ready.count_down(); // Signal this receiver is ready
// Each receiver should receive ALL messages from ALL senders (broadcast mode)
for (int j = 0; j < total_messages; ++j) {
buffer buf = ch.recv(2000);
if (!buf.empty()) {
++received_count;
}
}
});
}
// Wait for all receivers to be ready
receivers_ready.wait();
std::vector<std::thread> senders;
for (int i = 0; i < num_senders; ++i) {
senders.emplace_back([&, i]() {
channel ch(name.c_str(), sender);
for (int j = 0; j < messages_per_sender; ++j) {
std::string msg = "S" + std::to_string(i) + "M" + std::to_string(j);
if (ch.send(msg, 1000)) {
++sent_count;
}
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
});
}
for (auto& t : senders) {
t.join();
}
for (auto& t : receivers) {
t.join();
}
EXPECT_EQ(sent_count.load(), num_senders * messages_per_sender);
// Broadcast mode: every receiver gets every message,
// so the expected total is 2 senders x 5 messages x 2 receivers = 20
EXPECT_EQ(received_count.load(), num_senders * messages_per_sender * num_receivers);
}
// Test try_send and try_recv
TEST_F(ChannelTest, TrySendTryRecv) {
std::string name = generate_unique_ipc_name("channel_try");
channel sender_ch(name.c_str(), sender);
channel receiver_ch(name.c_str(), receiver);
ASSERT_TRUE(sender_ch.valid());
ASSERT_TRUE(receiver_ch.valid());
bool sent = sender_ch.try_send(std::string("Try Test"));
if (sent) {
buffer buf = receiver_ch.try_recv();
EXPECT_FALSE(buf.empty());
}
}
// Test timeout scenarios
TEST_F(ChannelTest, SendTimeout) {
std::string name = generate_unique_ipc_name("channel_timeout");
channel ch(name.c_str(), sender);
ASSERT_TRUE(ch.valid());
// Send with a very short timeout (may fail without a receiver)
bool sent = ch.send(std::string("Timeout Test"), 1);
(void)sent; // Result depends on whether a receiver is connected
}
// Test clear and clear_storage
TEST_F(ChannelTest, ClearStorage) {
std::string name = generate_unique_ipc_name("channel_clear");
{
channel ch(name.c_str(), sender);
EXPECT_TRUE(ch.valid());
}
channel::clear_storage(name.c_str());
}
// Test handle() method
TEST_F(ChannelTest, Handle) {
std::string name = generate_unique_ipc_name("channel_handle");
channel ch(name.c_str(), sender);
ASSERT_TRUE(ch.valid());
handle_t h = ch.handle();
EXPECT_NE(h, nullptr);
}

test/test_locks.cpp (new file, 613 lines)
/**
* @file test_locks.cpp
* @brief Comprehensive unit tests for ipc::rw_lock and ipc::spin_lock classes
*
* This test suite covers:
* - spin_lock: basic lock/unlock operations
* - rw_lock: read-write lock functionality
* - rw_lock: exclusive (write) locks
* - rw_lock: shared (read) locks
* - Concurrent access patterns
* - Reader-writer scenarios
*/
#include <gtest/gtest.h>
#include <thread>
#include <chrono>
#include <atomic>
#include <vector>
#include "libipc/rw_lock.h"
using namespace ipc;
// ========== spin_lock Tests ==========
class SpinLockTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(5));
}
};
// Test basic lock and unlock
TEST_F(SpinLockTest, BasicLockUnlock) {
spin_lock lock;
lock.lock();
lock.unlock();
// Should complete without hanging
}
// Test multiple lock/unlock cycles
TEST_F(SpinLockTest, MultipleCycles) {
spin_lock lock;
for (int i = 0; i < 100; ++i) {
lock.lock();
lock.unlock();
}
}
// Test critical section protection
TEST_F(SpinLockTest, CriticalSection) {
spin_lock lock;
int counter = 0;
const int iterations = 1000;
auto increment_task = [&]() {
for (int i = 0; i < iterations; ++i) {
lock.lock();
++counter;
lock.unlock();
}
};
std::thread t1(increment_task);
std::thread t2(increment_task);
t1.join();
t2.join();
EXPECT_EQ(counter, iterations * 2);
}
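// Since spin_lock provides lock()/unlock() (the BasicLockable interface), the
// manual pairs above could also be expressed with a scope guard, e.g.
//   { std::lock_guard<spin_lock> g{lock}; ++counter; }
// (illustrative sketch only; requires <mutex>).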
// Test mutual exclusion
TEST_F(SpinLockTest, MutualExclusion) {
spin_lock lock;
std::atomic<bool> thread1_in_cs{false};
std::atomic<bool> thread2_in_cs{false};
std::atomic<bool> violation{false};
auto cs_task = [&](std::atomic<bool>& my_flag, std::atomic<bool>& other_flag) {
for (int i = 0; i < 100; ++i) {
lock.lock();
my_flag.store(true);
if (other_flag.load()) {
violation.store(true);
}
std::this_thread::sleep_for(std::chrono::microseconds(10));
my_flag.store(false);
lock.unlock();
std::this_thread::yield();
}
};
std::thread t1(cs_task, std::ref(thread1_in_cs), std::ref(thread2_in_cs));
std::thread t2(cs_task, std::ref(thread2_in_cs), std::ref(thread1_in_cs));
t1.join();
t2.join();
EXPECT_FALSE(violation.load());
}
// Test concurrent access
TEST_F(SpinLockTest, ConcurrentAccess) {
spin_lock lock;
std::atomic<int> shared_data{0};
const int num_threads = 4;
const int ops_per_thread = 100;
std::vector<std::thread> threads;
for (int i = 0; i < num_threads; ++i) {
threads.emplace_back([&]() {
for (int j = 0; j < ops_per_thread; ++j) {
lock.lock();
int temp = shared_data.load();
std::this_thread::yield();
shared_data.store(temp + 1);
lock.unlock();
}
});
}
for (auto& t : threads) {
t.join();
}
EXPECT_EQ(shared_data.load(), num_threads * ops_per_thread);
}
// Test rapid lock/unlock
TEST_F(SpinLockTest, RapidLockUnlock) {
spin_lock lock;
auto rapid_task = [&]() {
for (int i = 0; i < 10000; ++i) {
lock.lock();
lock.unlock();
}
};
std::thread t1(rapid_task);
std::thread t2(rapid_task);
t1.join();
t2.join();
// Should complete without deadlock
}
// Test contention scenario
TEST_F(SpinLockTest, Contention) {
spin_lock lock;
std::atomic<int> work_done{0};
const int num_threads = 8;
std::vector<std::thread> threads;
for (int i = 0; i < num_threads; ++i) {
threads.emplace_back([&]() {
for (int j = 0; j < 50; ++j) {
lock.lock();
++work_done;
std::this_thread::sleep_for(std::chrono::microseconds(100));
lock.unlock();
std::this_thread::yield();
}
});
}
for (auto& t : threads) {
t.join();
}
EXPECT_EQ(work_done.load(), num_threads * 50);
}
// ========== rw_lock Tests ==========
class RWLockTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(5));
}
};
// Test basic write lock and unlock
TEST_F(RWLockTest, BasicWriteLock) {
rw_lock lock;
lock.lock();
lock.unlock();
// Should complete without hanging
}
// Test basic read lock and unlock
TEST_F(RWLockTest, BasicReadLock) {
rw_lock lock;
lock.lock_shared();
lock.unlock_shared();
// Should complete without hanging
}
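// lock_shared()/unlock_shared() mirror the std::shared_mutex naming, so the
// read side could likely be wrapped in a guard such as
//   { std::shared_lock<rw_lock> g{lock}; /* read-only section */ }
// (untested sketch; requires <shared_mutex>, available since C++14).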
// Test multiple write cycles
TEST_F(RWLockTest, MultipleWriteCycles) {
rw_lock lock;
for (int i = 0; i < 100; ++i) {
lock.lock();
lock.unlock();
}
}
// Test multiple read cycles
TEST_F(RWLockTest, MultipleReadCycles) {
rw_lock lock;
for (int i = 0; i < 100; ++i) {
lock.lock_shared();
lock.unlock_shared();
}
}
// Test write lock protects data
TEST_F(RWLockTest, WriteLockProtection) {
rw_lock lock;
int data = 0;
const int iterations = 500;
auto writer_task = [&]() {
for (int i = 0; i < iterations; ++i) {
lock.lock();
++data;
lock.unlock();
}
};
std::thread t1(writer_task);
std::thread t2(writer_task);
t1.join();
t2.join();
EXPECT_EQ(data, iterations * 2);
}
// Test multiple readers can access concurrently
TEST_F(RWLockTest, ConcurrentReaders) {
rw_lock lock;
std::atomic<int> concurrent_readers{0};
std::atomic<int> max_concurrent{0};
const int num_readers = 5;
std::vector<std::thread> readers;
for (int i = 0; i < num_readers; ++i) {
readers.emplace_back([&]() {
for (int j = 0; j < 20; ++j) {
lock.lock_shared();
int current = ++concurrent_readers;
// Track maximum concurrent readers
int current_max = max_concurrent.load();
while (current > current_max) {
if (max_concurrent.compare_exchange_weak(current_max, current)) {
break;
}
}
std::this_thread::sleep_for(std::chrono::microseconds(100));
--concurrent_readers;
lock.unlock_shared();
std::this_thread::yield();
}
});
}
for (auto& t : readers) {
t.join();
}
// Should have had multiple concurrent readers
EXPECT_GT(max_concurrent.load(), 1);
}
// Test writers have exclusive access
TEST_F(RWLockTest, WriterExclusiveAccess) {
rw_lock lock;
std::atomic<bool> writer_in_cs{false};
std::atomic<bool> violation{false};
auto writer_task = [&]() {
for (int i = 0; i < 50; ++i) {
lock.lock();
if (writer_in_cs.exchange(true)) {
violation.store(true);
}
std::this_thread::sleep_for(std::chrono::microseconds(50));
writer_in_cs.store(false);
lock.unlock();
std::this_thread::yield();
}
};
std::thread t1(writer_task);
std::thread t2(writer_task);
t1.join();
t2.join();
EXPECT_FALSE(violation.load());
}
// Test readers and writers don't overlap
TEST_F(RWLockTest, ReadersWritersNoOverlap) {
rw_lock lock;
std::atomic<int> readers{0};
std::atomic<bool> writer_active{false};
std::atomic<bool> violation{false};
auto reader_task = [&]() {
for (int i = 0; i < 30; ++i) {
lock.lock_shared();
++readers;
if (writer_active.load()) {
violation.store(true);
}
std::this_thread::sleep_for(std::chrono::microseconds(50));
--readers;
lock.unlock_shared();
std::this_thread::yield();
}
};
auto writer_task = [&]() {
for (int i = 0; i < 15; ++i) {
lock.lock();
writer_active.store(true);
if (readers.load() > 0) {
violation.store(true);
}
std::this_thread::sleep_for(std::chrono::microseconds(50));
writer_active.store(false);
lock.unlock();
std::this_thread::yield();
}
};
std::thread r1(reader_task);
std::thread r2(reader_task);
std::thread w1(writer_task);
r1.join();
r2.join();
w1.join();
EXPECT_FALSE(violation.load());
}
// Test read-write-read pattern
TEST_F(RWLockTest, ReadWriteReadPattern) {
rw_lock lock;
int data = 0;
std::atomic<int> iterations{0};
auto pattern_task = [&](int id) {
for (int i = 0; i < 20; ++i) {
// Write: increment based on thread id
lock.lock();
data += id;
lock.unlock();
iterations.fetch_add(1);
std::this_thread::yield();
// Read: verify data is consistent
lock.lock_shared();
int read_val = data;
EXPECT_GE(read_val, 0); // Data should be non-negative
lock.unlock_shared();
std::this_thread::yield();
}
};
std::thread t1(pattern_task, 1);
std::thread t2(pattern_task, 2);
t1.join();
t2.join();
// Each thread increments by its id (1 or 2), 20 times each
// Total = 1*20 + 2*20 = 20 + 40 = 60
EXPECT_EQ(data, 60);
EXPECT_EQ(iterations.load(), 40);
}
// Test many readers, one writer
TEST_F(RWLockTest, ManyReadersOneWriter) {
rw_lock lock;
std::atomic<int> data{0};
std::atomic<int> read_count{0};
const int num_readers = 10;
std::vector<std::thread> readers;
for (int i = 0; i < num_readers; ++i) {
readers.emplace_back([&]() {
for (int j = 0; j < 50; ++j) {
lock.lock_shared();
(void)data.load(); // Read under the shared lock; value itself not needed
++read_count;
lock.unlock_shared();
std::this_thread::yield();
}
});
}
std::thread writer([&]() {
for (int i = 0; i < 100; ++i) {
lock.lock();
data.store(data.load() + 1);
lock.unlock();
std::this_thread::yield();
}
});
for (auto& t : readers) {
t.join();
}
writer.join();
EXPECT_EQ(data.load(), 100);
EXPECT_EQ(read_count.load(), num_readers * 50);
}
// Test rapid read lock/unlock
TEST_F(RWLockTest, RapidReadLocks) {
rw_lock lock;
auto rapid_read = [&]() {
for (int i = 0; i < 5000; ++i) {
lock.lock_shared();
lock.unlock_shared();
}
};
std::thread t1(rapid_read);
std::thread t2(rapid_read);
std::thread t3(rapid_read);
t1.join();
t2.join();
t3.join();
}
// Test rapid write lock/unlock
TEST_F(RWLockTest, RapidWriteLocks) {
rw_lock lock;
auto rapid_write = [&]() {
for (int i = 0; i < 2000; ++i) {
lock.lock();
lock.unlock();
}
};
std::thread t1(rapid_write);
std::thread t2(rapid_write);
t1.join();
t2.join();
}
// Test mixed rapid operations
TEST_F(RWLockTest, MixedRapidOperations) {
rw_lock lock;
auto rapid_read = [&]() {
for (int i = 0; i < 1000; ++i) {
lock.lock_shared();
lock.unlock_shared();
}
};
auto rapid_write = [&]() {
for (int i = 0; i < 500; ++i) {
lock.lock();
lock.unlock();
}
};
std::thread r1(rapid_read);
std::thread r2(rapid_read);
std::thread w1(rapid_write);
r1.join();
r2.join();
w1.join();
}
// Test write lock doesn't allow concurrent readers
TEST_F(RWLockTest, WriteLockBlocksReaders) {
rw_lock lock;
std::atomic<bool> write_locked{false};
std::atomic<bool> reader_entered{false};
std::thread writer([&]() {
lock.lock();
write_locked.store(true);
std::this_thread::sleep_for(std::chrono::milliseconds(100));
write_locked.store(false);
lock.unlock();
});
std::thread reader([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(20));
lock.lock_shared();
if (write_locked.load()) {
reader_entered.store(true);
}
lock.unlock_shared();
});
writer.join();
reader.join();
// Reader should not have entered while writer held the lock
EXPECT_FALSE(reader_entered.load());
}
// Test a repeated read-then-write pattern (the shared lock is released before the exclusive lock is taken)
TEST_F(RWLockTest, MultipleWriteLockPattern) {
rw_lock lock;
int data = 0;
for (int i = 0; i < 100; ++i) {
// Read
lock.lock_shared();
int temp = data;
lock.unlock_shared();
// Write
lock.lock();
data = temp + 1;
lock.unlock();
}
EXPECT_EQ(data, 100);
}
// Test concurrent mixed operations
TEST_F(RWLockTest, ConcurrentMixedOperations) {
rw_lock lock;
std::atomic<int> data{0};
std::atomic<int> reads{0};
std::atomic<int> writes{0};
auto mixed_task = [&](int id) {
for (int i = 0; i < 50; ++i) {
if (i % 3 == 0) {
// Write operation
lock.lock();
data.store(data.load() + 1);
++writes;
lock.unlock();
} else {
// Read operation
lock.lock_shared();
(void)data.load(); // Read-only access under the shared lock
++reads;
lock.unlock_shared();
}
std::this_thread::yield();
}
};
std::thread t1(mixed_task, 1);
std::thread t2(mixed_task, 2);
std::thread t3(mixed_task, 3);
std::thread t4(mixed_task, 4);
t1.join();
t2.join();
t3.join();
t4.join();
// Each thread performs 17 writes (i % 3 == 0 for i in [0, 50)) and 33 reads;
// the assertions only require that both kinds of operation occurred.
EXPECT_GT(reads.load(), 0);
EXPECT_GT(writes.load(), 0);
}

test/test_mutex.cpp (new file, 501 lines)
/**
* @file test_mutex.cpp
* @brief Comprehensive unit tests for ipc::sync::mutex class
*
* This test suite covers:
* - Mutex construction (default and named)
* - Lock/unlock operations
* - Try-lock functionality
* - Timed lock with timeout
* - Named mutex for inter-process synchronization
* - Resource cleanup (clear, clear_storage)
* - Native handle access
* - Concurrent access scenarios
*/
#include <gtest/gtest.h>
#include <thread>
#include <chrono>
#include <atomic>
#include <vector>
#include <system_error> // for std::system_error used in the try_lock exception test
#include "libipc/mutex.h"
#include "libipc/def.h"
using namespace ipc;
using namespace ipc::sync;
namespace {
// Generate unique mutex names for tests
std::string generate_unique_mutex_name(const char* prefix) {
static int counter = 0;
return std::string(prefix) + "_mutex_" + std::to_string(++counter);
}
} // anonymous namespace
class MutexTest : public ::testing::Test {
protected:
void TearDown() override {
// Allow time for cleanup
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
};
// Test default constructor
TEST_F(MutexTest, DefaultConstructor) {
mutex mtx;
// Default constructed mutex may or may not be valid depending on implementation
// Just ensure it doesn't crash
}
// Test named constructor
TEST_F(MutexTest, NamedConstructor) {
std::string name = generate_unique_mutex_name("named_ctor");
mutex mtx(name.c_str());
EXPECT_TRUE(mtx.valid());
}
// Test native() const method
TEST_F(MutexTest, NativeConst) {
std::string name = generate_unique_mutex_name("native_const");
const mutex mtx(name.c_str());
const void* native_handle = mtx.native();
EXPECT_NE(native_handle, nullptr);
}
// Test native() non-const method
TEST_F(MutexTest, NativeNonConst) {
std::string name = generate_unique_mutex_name("native_nonconst");
mutex mtx(name.c_str());
void* native_handle = mtx.native();
EXPECT_NE(native_handle, nullptr);
}
// Test valid() method
TEST_F(MutexTest, Valid) {
mutex mtx1;
// May or may not be valid without open
std::string name = generate_unique_mutex_name("valid");
mutex mtx2(name.c_str());
EXPECT_TRUE(mtx2.valid());
}
// Test open() method
TEST_F(MutexTest, Open) {
std::string name = generate_unique_mutex_name("open");
mutex mtx;
bool result = mtx.open(name.c_str());
EXPECT_TRUE(result);
EXPECT_TRUE(mtx.valid());
}
// Test close() method
TEST_F(MutexTest, Close) {
std::string name = generate_unique_mutex_name("close");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
mtx.close();
EXPECT_FALSE(mtx.valid());
}
// Test clear() method
TEST_F(MutexTest, Clear) {
std::string name = generate_unique_mutex_name("clear");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
mtx.clear();
EXPECT_FALSE(mtx.valid());
}
// Test clear_storage() static method
TEST_F(MutexTest, ClearStorage) {
std::string name = generate_unique_mutex_name("clear_storage");
{
mutex mtx(name.c_str());
EXPECT_TRUE(mtx.valid());
}
mutex::clear_storage(name.c_str());
// Try to open after clear - should create new or fail gracefully
mutex mtx2(name.c_str());
}
// Test basic lock and unlock
TEST_F(MutexTest, LockUnlock) {
std::string name = generate_unique_mutex_name("lock_unlock");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
bool locked = mtx.lock();
EXPECT_TRUE(locked);
bool unlocked = mtx.unlock();
EXPECT_TRUE(unlocked);
}
// Test try_lock
TEST_F(MutexTest, TryLock) {
std::string name = generate_unique_mutex_name("try_lock");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
bool locked = mtx.try_lock();
EXPECT_TRUE(locked);
if (locked) {
mtx.unlock();
}
}
// Test timed lock with infinite timeout
TEST_F(MutexTest, TimedLockInfinite) {
std::string name = generate_unique_mutex_name("timed_lock_inf");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
bool locked = mtx.lock(invalid_value);
EXPECT_TRUE(locked);
if (locked) {
mtx.unlock();
}
}
// Test timed lock with timeout
TEST_F(MutexTest, TimedLockTimeout) {
std::string name = generate_unique_mutex_name("timed_lock_timeout");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
// Lock with 100ms timeout
bool locked = mtx.lock(100);
EXPECT_TRUE(locked);
if (locked) {
mtx.unlock();
}
}
// Test mutex protects critical section
TEST_F(MutexTest, CriticalSection) {
std::string name = generate_unique_mutex_name("critical_section");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
int shared_counter = 0;
const int iterations = 100;
auto increment_task = [&]() {
for (int i = 0; i < iterations; ++i) {
mtx.lock();
++shared_counter;
mtx.unlock();
}
};
std::thread t1(increment_task);
std::thread t2(increment_task);
t1.join();
t2.join();
EXPECT_EQ(shared_counter, iterations * 2);
}
// Test concurrent try_lock
TEST_F(MutexTest, ConcurrentTryLock) {
std::string name = generate_unique_mutex_name("concurrent_try");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
std::atomic<int> success_count{0};
std::atomic<int> fail_count{0};
auto try_lock_task = [&]() {
for (int i = 0; i < 10; ++i) {
if (mtx.try_lock()) {
++success_count;
std::this_thread::sleep_for(std::chrono::milliseconds(1));
mtx.unlock();
} else {
++fail_count;
}
std::this_thread::yield();
}
};
std::thread t1(try_lock_task);
std::thread t2(try_lock_task);
std::thread t3(try_lock_task);
t1.join();
t2.join();
t3.join();
EXPECT_GT(success_count.load(), 0);
// Some try_locks should succeed
}
// Test lock contention
TEST_F(MutexTest, LockContention) {
std::string name = generate_unique_mutex_name("contention");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
std::atomic<bool> thread1_in_cs{false};
std::atomic<bool> thread2_in_cs{false};
std::atomic<bool> violation{false};
auto contention_task = [&](std::atomic<bool>& my_flag,
std::atomic<bool>& other_flag) {
for (int i = 0; i < 50; ++i) {
mtx.lock();
my_flag.store(true);
if (other_flag.load()) {
violation.store(true);
}
std::this_thread::sleep_for(std::chrono::microseconds(10));
my_flag.store(false);
mtx.unlock();
std::this_thread::yield();
}
};
std::thread t1(contention_task, std::ref(thread1_in_cs), std::ref(thread2_in_cs));
std::thread t2(contention_task, std::ref(thread2_in_cs), std::ref(thread1_in_cs));
t1.join();
t2.join();
// Should never have both threads in critical section
EXPECT_FALSE(violation.load());
}
// Test multiple lock/unlock cycles
TEST_F(MutexTest, MultipleCycles) {
std::string name = generate_unique_mutex_name("cycles");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
for (int i = 0; i < 100; ++i) {
ASSERT_TRUE(mtx.lock());
ASSERT_TRUE(mtx.unlock());
}
}
// Test timed lock timeout scenario
TEST_F(MutexTest, TimedLockTimeoutScenario) {
std::string name = generate_unique_mutex_name("timeout_scenario");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
// Lock in main thread
ASSERT_TRUE(mtx.lock());
std::atomic<bool> timeout_occurred{false};
std::thread t([&]() {
// Try to lock with short timeout - should timeout
bool locked = mtx.lock(50); // 50ms timeout
if (!locked) {
timeout_occurred.store(true);
} else {
mtx.unlock();
}
});
std::this_thread::sleep_for(std::chrono::milliseconds(100));
mtx.unlock();
t.join();
// Timeout should have occurred since we held the lock
EXPECT_TRUE(timeout_occurred.load());
}
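// Timeline of the scenario above: the main thread locks at t = 0, the worker
// calls lock(50) shortly afterwards and gives up at roughly t = 50 ms, and the
// main thread only unlocks at t = 100 ms, so the worker must observe a timeout.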
// Test reopen after close
TEST_F(MutexTest, ReopenAfterClose) {
std::string name = generate_unique_mutex_name("reopen");
mutex mtx;
ASSERT_TRUE(mtx.open(name.c_str()));
EXPECT_TRUE(mtx.valid());
mtx.close();
EXPECT_FALSE(mtx.valid());
ASSERT_TRUE(mtx.open(name.c_str()));
EXPECT_TRUE(mtx.valid());
}
// Test named mutex inter-thread synchronization
TEST_F(MutexTest, NamedMutexInterThread) {
std::string name = generate_unique_mutex_name("inter_thread");
int shared_data = 0;
std::atomic<bool> t1_done{false};
std::atomic<bool> t2_done{false};
std::thread t1([&]() {
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
mtx.lock();
shared_data = 100;
std::this_thread::sleep_for(std::chrono::milliseconds(50));
mtx.unlock();
t1_done.store(true);
});
std::thread t2([&]() {
// Wait a bit to ensure t1 starts first
std::this_thread::sleep_for(std::chrono::milliseconds(10));
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
mtx.lock();
EXPECT_TRUE(t1_done.load() || shared_data == 100);
shared_data = 200;
mtx.unlock();
t2_done.store(true);
});
t1.join();
t2.join();
EXPECT_EQ(shared_data, 200);
}
// Test exception safety of try_lock
TEST_F(MutexTest, TryLockExceptionSafety) {
std::string name = generate_unique_mutex_name("try_lock_exception");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
bool exception_thrown = false;
try {
mtx.try_lock();
} catch (const std::system_error&) {
exception_thrown = true;
} catch (...) {
FAIL() << "Unexpected exception type";
}
// try_lock may throw std::system_error on some platforms; we only verify
// that it can be handled without terminating the test.
(void)exception_thrown;
}
// Test concurrent open/close operations
TEST_F(MutexTest, ConcurrentOpenClose) {
std::vector<std::thread> threads;
std::atomic<int> success_count{0};
for (int i = 0; i < 5; ++i) {
threads.emplace_back([&, i]() {
std::string name = generate_unique_mutex_name("concurrent");
name += std::to_string(i);
mutex mtx;
if (mtx.open(name.c_str())) {
++success_count;
mtx.close();
}
});
}
for (auto& t : threads) {
t.join();
}
EXPECT_EQ(success_count.load(), 5);
}
// Test mutex with zero timeout
TEST_F(MutexTest, ZeroTimeout) {
std::string name = generate_unique_mutex_name("zero_timeout");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
// Lock with zero timeout (should try once and return)
bool locked = mtx.lock(0);
if (locked) {
mtx.unlock();
}
// Result may vary, just ensure it doesn't hang
}
// Test rapid lock/unlock sequence
TEST_F(MutexTest, RapidLockUnlock) {
std::string name = generate_unique_mutex_name("rapid");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
auto rapid_task = [&]() {
for (int i = 0; i < 1000; ++i) {
mtx.lock();
mtx.unlock();
}
};
std::thread t1(rapid_task);
std::thread t2(rapid_task);
t1.join();
t2.join();
// Should complete without deadlock or crash
}
// Test lock after clear
TEST_F(MutexTest, LockAfterClear) {
std::string name = generate_unique_mutex_name("lock_after_clear");
mutex mtx(name.c_str());
ASSERT_TRUE(mtx.valid());
mtx.lock();
mtx.unlock();
mtx.clear();
EXPECT_FALSE(mtx.valid());
// Attempting to lock after clear should fail gracefully
bool locked = mtx.lock();
EXPECT_FALSE(locked);
}

test/test_semaphore.cpp (new file, 487 lines)
/**
* @file test_semaphore.cpp
* @brief Comprehensive unit tests for ipc::sync::semaphore class
*
* This test suite covers:
* - Semaphore construction (default and named with count)
* - Wait and post operations
* - Timed wait with timeout
* - Named semaphore for inter-process synchronization
* - Resource cleanup (clear, clear_storage)
* - Producer-consumer patterns
* - Multiple wait/post scenarios
*/
#include <gtest/gtest.h>
#include <thread>
#include <chrono>
#include <atomic>
#include <vector>
#include <cstdint> // for uint32_t used in the count-based tests
#include "libipc/semaphore.h"
#include "libipc/def.h"
using namespace ipc;
using namespace ipc::sync;
namespace {
std::string generate_unique_sem_name(const char* prefix) {
static int counter = 0;
return std::string(prefix) + "_sem_" + std::to_string(++counter);
}
} // anonymous namespace
class SemaphoreTest : public ::testing::Test {
protected:
void TearDown() override {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
};
// Test default constructor
TEST_F(SemaphoreTest, DefaultConstructor) {
semaphore sem;
// Default constructed semaphore
}
// Test named constructor with count
TEST_F(SemaphoreTest, NamedConstructorWithCount) {
std::string name = generate_unique_sem_name("named_count");
semaphore sem(name.c_str(), 5);
EXPECT_TRUE(sem.valid());
}
// Test named constructor with zero count
TEST_F(SemaphoreTest, NamedConstructorZeroCount) {
std::string name = generate_unique_sem_name("zero_count");
semaphore sem(name.c_str(), 0);
EXPECT_TRUE(sem.valid());
}
// Test native() methods
TEST_F(SemaphoreTest, NativeHandle) {
std::string name = generate_unique_sem_name("native");
semaphore sem(name.c_str(), 1);
ASSERT_TRUE(sem.valid());
const void* const_handle = static_cast<const semaphore&>(sem).native();
void* handle = sem.native();
EXPECT_NE(const_handle, nullptr);
EXPECT_NE(handle, nullptr);
}
// Test valid() method
TEST_F(SemaphoreTest, Valid) {
semaphore sem1;
std::string name = generate_unique_sem_name("valid");
semaphore sem2(name.c_str(), 1);
EXPECT_TRUE(sem2.valid());
}
// Test open() method
TEST_F(SemaphoreTest, Open) {
std::string name = generate_unique_sem_name("open");
semaphore sem;
bool result = sem.open(name.c_str(), 3);
EXPECT_TRUE(result);
EXPECT_TRUE(sem.valid());
}
// Test close() method
TEST_F(SemaphoreTest, Close) {
std::string name = generate_unique_sem_name("close");
semaphore sem(name.c_str(), 1);
ASSERT_TRUE(sem.valid());
sem.close();
EXPECT_FALSE(sem.valid());
}
// Test clear() method
TEST_F(SemaphoreTest, Clear) {
std::string name = generate_unique_sem_name("clear");
semaphore sem(name.c_str(), 1);
ASSERT_TRUE(sem.valid());
sem.clear();
EXPECT_FALSE(sem.valid());
}
// Test clear_storage() static method
TEST_F(SemaphoreTest, ClearStorage) {
std::string name = generate_unique_sem_name("clear_storage");
{
semaphore sem(name.c_str(), 1);
EXPECT_TRUE(sem.valid());
}
semaphore::clear_storage(name.c_str());
}
// Test basic wait and post
TEST_F(SemaphoreTest, WaitPost) {
std::string name = generate_unique_sem_name("wait_post");
semaphore sem(name.c_str(), 1);
ASSERT_TRUE(sem.valid());
bool waited = sem.wait();
EXPECT_TRUE(waited);
bool posted = sem.post();
EXPECT_TRUE(posted);
}
// Test post with count
TEST_F(SemaphoreTest, PostWithCount) {
std::string name = generate_unique_sem_name("post_count");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
bool posted = sem.post(5);
EXPECT_TRUE(posted);
// Now should be able to wait 5 times
for (int i = 0; i < 5; ++i) {
EXPECT_TRUE(sem.wait(10)); // 10ms timeout
}
}
// Test timed wait with timeout
TEST_F(SemaphoreTest, TimedWait) {
std::string name = generate_unique_sem_name("timed_wait");
semaphore sem(name.c_str(), 1);
ASSERT_TRUE(sem.valid());
bool waited = sem.wait(100); // 100ms timeout
EXPECT_TRUE(waited);
}
// Test wait timeout scenario
TEST_F(SemaphoreTest, WaitTimeout) {
std::string name = generate_unique_sem_name("wait_timeout");
semaphore sem(name.c_str(), 0); // Zero count
ASSERT_TRUE(sem.valid());
auto start = std::chrono::steady_clock::now();
bool waited = sem.wait(50); // 50ms timeout
auto end = std::chrono::steady_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
// Should timeout
EXPECT_FALSE(waited);
EXPECT_GE(elapsed, 40); // Allow some tolerance
}
// Test infinite wait
TEST_F(SemaphoreTest, InfiniteWait) {
std::string name = generate_unique_sem_name("infinite_wait");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
std::atomic<bool> wait_started{false};
std::atomic<bool> wait_succeeded{false};
std::thread waiter([&]() {
wait_started.store(true);
bool result = sem.wait(invalid_value);
wait_succeeded.store(result);
});
// Wait for thread to start waiting
while (!wait_started.load()) {
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
std::this_thread::sleep_for(std::chrono::milliseconds(50));
// Post to release the waiter
sem.post();
waiter.join();
EXPECT_TRUE(wait_succeeded.load());
}
// Test producer-consumer pattern
TEST_F(SemaphoreTest, ProducerConsumer) {
std::string name = generate_unique_sem_name("prod_cons");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
std::atomic<int> produced{0};
std::atomic<int> consumed{0};
const int count = 10;
std::thread producer([&]() {
for (int i = 0; i < count; ++i) {
++produced;
sem.post();
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
});
std::thread consumer([&]() {
for (int i = 0; i < count; ++i) {
sem.wait();
++consumed;
}
});
producer.join();
consumer.join();
EXPECT_EQ(produced.load(), count);
EXPECT_EQ(consumed.load(), count);
}
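// The semaphore's count equals the number of post() calls not yet matched by a
// wait(), so the consumer above can never run ahead of the producer; both
// counters settle at `count` (10) once the threads join.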
// Test multiple producers and consumers
TEST_F(SemaphoreTest, MultipleProducersConsumers) {
std::string name = generate_unique_sem_name("multi_prod_cons");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
std::atomic<int> total_produced{0};
std::atomic<int> total_consumed{0};
const int items_per_producer = 5;
const int num_producers = 3;
const int num_consumers = 3;
std::vector<std::thread> producers;
for (int i = 0; i < num_producers; ++i) {
producers.emplace_back([&]() {
for (int j = 0; j < items_per_producer; ++j) {
++total_produced;
sem.post();
std::this_thread::yield();
}
});
}
std::vector<std::thread> consumers;
for (int i = 0; i < num_consumers; ++i) {
consumers.emplace_back([&]() {
for (int j = 0; j < items_per_producer; ++j) {
if (sem.wait(1000)) {
++total_consumed;
}
}
});
}
for (auto& t : producers) t.join();
for (auto& t : consumers) t.join();
EXPECT_EQ(total_produced.load(), items_per_producer * num_producers);
EXPECT_EQ(total_consumed.load(), items_per_producer * num_producers);
}
// Test semaphore with initial count
TEST_F(SemaphoreTest, InitialCount) {
std::string name = generate_unique_sem_name("initial_count");
const uint32_t initial = 3;
semaphore sem(name.c_str(), initial);
ASSERT_TRUE(sem.valid());
// Should be able to wait 'initial' times without blocking
for (uint32_t i = 0; i < initial; ++i) {
EXPECT_TRUE(sem.wait(10));
}
// Next wait should timeout
EXPECT_FALSE(sem.wait(10));
}
// Test rapid post operations
TEST_F(SemaphoreTest, RapidPost) {
std::string name = generate_unique_sem_name("rapid_post");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
const int post_count = 100;
for (int i = 0; i < post_count; ++i) {
EXPECT_TRUE(sem.post());
}
// Should be able to wait post_count times
int wait_count = 0;
for (int i = 0; i < post_count; ++i) {
if (sem.wait(10)) {
++wait_count;
}
}
EXPECT_EQ(wait_count, post_count);
}
// Test concurrent post operations
TEST_F(SemaphoreTest, ConcurrentPost) {
std::string name = generate_unique_sem_name("concurrent_post");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
std::atomic<int> post_count{0};
const int threads = 5;
const int posts_per_thread = 10;
std::vector<std::thread> posters;
for (int i = 0; i < threads; ++i) {
posters.emplace_back([&]() {
for (int j = 0; j < posts_per_thread; ++j) {
if (sem.post()) {
++post_count;
}
}
});
}
for (auto& t : posters) t.join();
EXPECT_EQ(post_count.load(), threads * posts_per_thread);
// Verify by consuming
int consumed = 0;
for (int i = 0; i < threads * posts_per_thread; ++i) {
if (sem.wait(10)) {
++consumed;
}
}
EXPECT_EQ(consumed, threads * posts_per_thread);
}
// Test reopen after close
TEST_F(SemaphoreTest, ReopenAfterClose) {
std::string name = generate_unique_sem_name("reopen");
semaphore sem;
ASSERT_TRUE(sem.open(name.c_str(), 2));
EXPECT_TRUE(sem.valid());
sem.close();
EXPECT_FALSE(sem.valid());
ASSERT_TRUE(sem.open(name.c_str(), 3));
EXPECT_TRUE(sem.valid());
}
// Test named semaphore sharing between threads
TEST_F(SemaphoreTest, NamedSemaphoreSharing) {
std::string name = generate_unique_sem_name("sharing");
std::atomic<int> value{0};
std::thread t1([&]() {
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
sem.wait(); // Wait for signal
value.store(100);
});
std::thread t2([&]() {
std::this_thread::sleep_for(std::chrono::milliseconds(50));
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
sem.post(); // Signal t1
});
t1.join();
t2.join();
EXPECT_EQ(value.load(), 100);
}
// Test post multiple count at once
TEST_F(SemaphoreTest, PostMultiple) {
std::string name = generate_unique_sem_name("post_multiple");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
const uint32_t count = 10;
bool posted = sem.post(count);
EXPECT_TRUE(posted);
// Consume all
for (uint32_t i = 0; i < count; ++i) {
EXPECT_TRUE(sem.wait(10));
}
// Should be empty now
EXPECT_FALSE(sem.wait(10));
}
// Test semaphore after clear
TEST_F(SemaphoreTest, AfterClear) {
std::string name = generate_unique_sem_name("after_clear");
semaphore sem(name.c_str(), 5);
ASSERT_TRUE(sem.valid());
sem.wait();
sem.clear();
EXPECT_FALSE(sem.valid());
// Operations after clear should fail gracefully
EXPECT_FALSE(sem.wait(10));
EXPECT_FALSE(sem.post());
}
// Test zero timeout
TEST_F(SemaphoreTest, ZeroTimeout) {
std::string name = generate_unique_sem_name("zero_timeout");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
bool waited = sem.wait(0);
(void)waited; // Should return immediately (either success or timeout)
}
// Test high-frequency wait/post
TEST_F(SemaphoreTest, HighFrequency) {
std::string name = generate_unique_sem_name("high_freq");
semaphore sem(name.c_str(), 0);
ASSERT_TRUE(sem.valid());
std::thread poster([&]() {
for (int i = 0; i < 1000; ++i) {
sem.post();
}
});
std::thread waiter([&]() {
for (int i = 0; i < 1000; ++i) {
sem.wait(100);
}
});
poster.join();
waiter.join();
}

test/test_shm.cpp (modified: executable → normal file, 134 → 521 lines)
/**
* @file test_shm.cpp
* @brief Comprehensive unit tests for ipc::shm (shared memory) functionality
*
* This test suite covers:
* - Low-level shared memory functions (acquire, get_mem, release, remove)
* - Reference counting (get_ref, sub_ref)
* - High-level handle class interface
* - Create and open modes
* - Resource cleanup and error handling
*/
#include <gtest/gtest.h>
#include <cstring>
#include <memory>
#include <string>
#include "libipc/shm.h"
using namespace ipc;
namespace {
// Generate unique shared memory names for tests
std::string generate_unique_name(const char* prefix) {
static int counter = 0;
return std::string(prefix) + "_test_" + std::to_string(++counter);
}
} // anonymous namespace
class ShmTest : public ::testing::Test {
protected:
void TearDown() override {
// Clean up any leftover shared memory segments
}
};
// ========== Low-level API Tests ==========
// Test acquire with create mode
TEST_F(ShmTest, AcquireCreate) {
std::string name = generate_unique_name("acquire_create");
const std::size_t size = 1024;
shm::id_t id = shm::acquire(name.c_str(), size, shm::create);
ASSERT_NE(id, nullptr);
std::size_t actual_size = 0;
void* mem = shm::get_mem(id, &actual_size);
EXPECT_NE(mem, nullptr);
EXPECT_GE(actual_size, size);
// Use remove(id) to clean up - it internally calls release()
shm::remove(id);
}
// Test acquire with open mode (should fail if not exists)
TEST_F(ShmTest, AcquireOpenNonExistent) {
std::string name = generate_unique_name("acquire_open_fail");
shm::id_t id = shm::acquire(name.c_str(), 1024, shm::open);
// Opening non-existent shared memory should return nullptr or handle failure gracefully
if (id != nullptr) {
shm::release(id);
}
}
// Test acquire with both create and open modes
TEST_F(ShmTest, AcquireCreateOrOpen) {
std::string name = generate_unique_name("acquire_both");
const std::size_t size = 2048;
shm::id_t id = shm::acquire(name.c_str(), size, shm::create | shm::open);
ASSERT_NE(id, nullptr);
std::size_t actual_size = 0;
void* mem = shm::get_mem(id, &actual_size);
EXPECT_NE(mem, nullptr);
EXPECT_GE(actual_size, size);
// Use remove(id) to clean up - it internally calls release()
shm::remove(id);
}
// Test get_mem function
TEST_F(ShmTest, GetMemory) {
std::string name = generate_unique_name("get_mem");
const std::size_t size = 512;
shm::id_t id = shm::acquire(name.c_str(), size, shm::create);
ASSERT_NE(id, nullptr);
std::size_t returned_size = 0;
void* mem = shm::get_mem(id, &returned_size);
EXPECT_NE(mem, nullptr);
EXPECT_GE(returned_size, size);
// Write and read test data
const char* test_data = "Shared memory test data";
std::strcpy(static_cast<char*>(mem), test_data);
EXPECT_STREQ(static_cast<char*>(mem), test_data);
// Use remove(id) to clean up - it internally calls release()
shm::remove(id);
}
// Test get_mem without size parameter
TEST_F(ShmTest, GetMemoryNoSize) {
std::string name = generate_unique_name("get_mem_no_size");
shm::id_t id = shm::acquire(name.c_str(), 256, shm::create);
ASSERT_NE(id, nullptr);
void* mem = shm::get_mem(id, nullptr);
EXPECT_NE(mem, nullptr);
// Use remove(id) to clean up - it internally calls release()
shm::remove(id);
}
// Test release function
TEST_F(ShmTest, ReleaseMemory) {
std::string name = generate_unique_name("release");
shm::id_t id = shm::acquire(name.c_str(), 128, shm::create);
ASSERT_NE(id, nullptr);
// Must call get_mem to map memory and set reference count
void* mem = shm::get_mem(id, nullptr);
ASSERT_NE(mem, nullptr);
// release returns the reference count before decrement, or -1 on error
std::int32_t ref_count = shm::release(id);
EXPECT_EQ(ref_count, 1); // Should be 1 (set by get_mem, before decrement)
shm::remove(name.c_str());
}
// Test remove by id
TEST_F(ShmTest, RemoveById) {
std::string name = generate_unique_name("remove_by_id");
shm::id_t id = shm::acquire(name.c_str(), 256, shm::create);
ASSERT_NE(id, nullptr);
// remove(id) internally calls release(id), so we don't need to call release first
shm::remove(id); // Should succeed
}
// Test remove by name
TEST_F(ShmTest, RemoveByName) {
std::string name = generate_unique_name("remove_by_name");
shm::id_t id = shm::acquire(name.c_str(), 256, shm::create);
ASSERT_NE(id, nullptr);
shm::release(id);
shm::remove(name.c_str()); // Should succeed
}
// Test reference counting
TEST_F(ShmTest, ReferenceCount) {
std::string name = generate_unique_name("ref_count");
shm::id_t id1 = shm::acquire(name.c_str(), 512, shm::create);
ASSERT_NE(id1, nullptr);
// Reference count is 0 after acquire (memory not mapped yet)
std::int32_t ref_before_get_mem = shm::get_ref(id1);
EXPECT_EQ(ref_before_get_mem, 0);
// get_mem maps memory and sets reference count to 1
void* mem1 = shm::get_mem(id1, nullptr);
ASSERT_NE(mem1, nullptr);
std::int32_t ref1 = shm::get_ref(id1);
EXPECT_EQ(ref1, 1);
// Acquire again and get_mem (should increase reference count)
shm::id_t id2 = shm::acquire(name.c_str(), 512, shm::open);
if (id2 != nullptr) {
void* mem2 = shm::get_mem(id2, nullptr);
ASSERT_NE(mem2, nullptr);
std::int32_t ref2 = shm::get_ref(id2);
EXPECT_EQ(ref2, 2); // Should be 2 now
shm::release(id2);
}
shm::release(id1);
shm::remove(name.c_str());
}
// Test sub_ref function
TEST_F(ShmTest, SubtractReference) {
std::string name = generate_unique_name("sub_ref");
shm::id_t id = shm::acquire(name.c_str(), 256, shm::create);
ASSERT_NE(id, nullptr);
// Must call get_mem first to map memory and initialize reference count
void* mem = shm::get_mem(id, nullptr);
ASSERT_NE(mem, nullptr);
std::int32_t ref_before = shm::get_ref(id);
EXPECT_EQ(ref_before, 1); // Should be 1 after get_mem
shm::sub_ref(id);
std::int32_t ref_after = shm::get_ref(id);
EXPECT_EQ(ref_after, 0); // Should be 0 after sub_ref
// Use remove(id) to clean up - it internally calls release()
shm::remove(id);
}
// ========== High-level handle class Tests ==========
// Test default handle constructor
TEST_F(ShmTest, HandleDefaultConstructor) {
shm::handle h;
EXPECT_FALSE(h.valid());
EXPECT_EQ(h.size(), 0u);
EXPECT_EQ(h.get(), nullptr);
}
// Test handle constructor with name and size
TEST_F(ShmTest, HandleConstructorWithParams) {
std::string name = generate_unique_name("handle_ctor");
const std::size_t size = 1024;
shm::handle h(name.c_str(), size);
EXPECT_TRUE(h.valid());
EXPECT_GE(h.size(), size);
EXPECT_NE(h.get(), nullptr);
EXPECT_STREQ(h.name(), name.c_str());
}
// Test handle move constructor
TEST_F(ShmTest, HandleMoveConstructor) {
std::string name = generate_unique_name("handle_move");
shm::handle h1(name.c_str(), 512);
ASSERT_TRUE(h1.valid());
void* ptr1 = h1.get();
std::size_t size1 = h1.size();
shm::handle h2(std::move(h1));
EXPECT_TRUE(h2.valid());
EXPECT_EQ(h2.get(), ptr1);
EXPECT_EQ(h2.size(), size1);
// h1 should be invalid after move
EXPECT_FALSE(h1.valid());
}
// Test handle swap
TEST_F(ShmTest, HandleSwap) {
std::string name1 = generate_unique_name("handle_swap1");
std::string name2 = generate_unique_name("handle_swap2");
shm::handle h1(name1.c_str(), 256);
shm::handle h2(name2.c_str(), 512);
void* ptr1 = h1.get();
void* ptr2 = h2.get();
std::size_t size1 = h1.size();
std::size_t size2 = h2.size();
h1.swap(h2);
EXPECT_EQ(h1.get(), ptr2);
EXPECT_EQ(h1.size(), size2);
EXPECT_EQ(h2.get(), ptr1);
EXPECT_EQ(h2.size(), size1);
}
// Test handle assignment operator
TEST_F(ShmTest, HandleAssignment) {
std::string name = generate_unique_name("handle_assign");
shm::handle h1(name.c_str(), 768);
void* ptr1 = h1.get();
shm::handle h2;
h2 = std::move(h1);
EXPECT_TRUE(h2.valid());
EXPECT_EQ(h2.get(), ptr1);
EXPECT_FALSE(h1.valid());
}
// Test handle valid() method
TEST_F(ShmTest, HandleValid) {
shm::handle h1;
EXPECT_FALSE(h1.valid());
std::string name = generate_unique_name("handle_valid");
shm::handle h2(name.c_str(), 128);
EXPECT_TRUE(h2.valid());
}
// Test handle size() method
TEST_F(ShmTest, HandleSize) {
std::string name = generate_unique_name("handle_size");
const std::size_t requested_size = 2048;
shm::handle h(name.c_str(), requested_size);
EXPECT_GE(h.size(), requested_size);
}
// Test handle name() method
TEST_F(ShmTest, HandleName) {
std::string name = generate_unique_name("handle_name");
shm::handle h(name.c_str(), 256);
EXPECT_STREQ(h.name(), name.c_str());
}
// Test handle ref() method
TEST_F(ShmTest, HandleRef) {
std::string name = generate_unique_name("handle_ref");
shm::handle h(name.c_str(), 256);
std::int32_t ref = h.ref();
EXPECT_GT(ref, 0);
}
// Test handle sub_ref() method
TEST_F(ShmTest, HandleSubRef) {
std::string name = generate_unique_name("handle_sub_ref");
shm::handle h(name.c_str(), 256);
std::int32_t ref_before = h.ref();
h.sub_ref();
std::int32_t ref_after = h.ref();
EXPECT_EQ(ref_after, ref_before - 1);
}
// Test handle acquire() method
TEST_F(ShmTest, HandleAcquire) {
shm::handle h;
EXPECT_FALSE(h.valid());
std::string name = generate_unique_name("handle_acquire");
bool result = h.acquire(name.c_str(), 512);
EXPECT_TRUE(result);
EXPECT_TRUE(h.valid());
EXPECT_GE(h.size(), 512u);
}
// Test handle release() method
TEST_F(ShmTest, HandleRelease) {
std::string name = generate_unique_name("handle_release");
shm::handle h(name.c_str(), 256);
ASSERT_TRUE(h.valid());
std::int32_t ref_count = h.release();
EXPECT_GE(ref_count, 0);
}
// Test handle clear() method
TEST_F(ShmTest, HandleClear) {
std::string name = generate_unique_name("handle_clear");
shm::handle h(name.c_str(), 256);
ASSERT_TRUE(h.valid());
h.clear();
EXPECT_FALSE(h.valid());
}
// Test handle clear_storage() static method
TEST_F(ShmTest, HandleClearStorage) {
std::string name = generate_unique_name("handle_clear_storage");
{
shm::handle h(name.c_str(), 256);
EXPECT_TRUE(h.valid());
}
shm::handle::clear_storage(name.c_str());
// Try to open - should fail or create new
shm::handle h2(name.c_str(), 256, shm::open);
// Behavior depends on implementation
}
// Test handle get() method
TEST_F(ShmTest, HandleGet) {
std::string name = generate_unique_name("handle_get");
shm::handle h(name.c_str(), 512);
void* mem = h.get();
EXPECT_NE(mem, nullptr);
// Write and read test
const char* test_str = "Handle get test";
std::strcpy(static_cast<char*>(mem), test_str);
EXPECT_STREQ(static_cast<char*>(mem), test_str);
}
// Test handle detach() and attach() methods
TEST_F(ShmTest, HandleDetachAttach) {
std::string name = generate_unique_name("handle_detach_attach");
shm::handle h1(name.c_str(), 256);
ASSERT_TRUE(h1.valid());
shm::id_t id = h1.detach();
EXPECT_NE(id, nullptr);
EXPECT_FALSE(h1.valid()); // Should be invalid after detach
shm::handle h2;
h2.attach(id);
EXPECT_TRUE(h2.valid());
// Clean up with either h2.clear() or shm::remove(id), but not both.
// Here we detach first so h2 no longer owns the id, then remove it,
// which releases the mapping and deletes the backing storage.
id = h2.detach();
shm::remove(id);
}
// Test writing and reading data through shared memory
TEST_F(ShmTest, WriteReadData) {
std::string name = generate_unique_name("write_read");
const std::size_t size = 1024;
shm::handle h1(name.c_str(), size);
ASSERT_TRUE(h1.valid());
// Write test data
struct TestData {
int value;
char text[64];
};
TestData* data1 = static_cast<TestData*>(h1.get());
data1->value = 42;
std::strcpy(data1->text, "Shared memory data");
// Open through another shm::handle (simulating a different process)
shm::handle h2(name.c_str(), size, shm::open);
if (h2.valid()) {
TestData* data2 = static_cast<TestData*>(h2.get());
EXPECT_EQ(data2->value, 42);
EXPECT_STREQ(data2->text, "Shared memory data");
}
}
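// Note: only trivially copyable, self-contained types (no pointers or
// references into the owning process) should be placed in shared memory like
// TestData above; an int plus a fixed-size char array satisfies that.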
// Test handle with different modes
TEST_F(ShmTest, HandleModes) {
std::string name = generate_unique_name("handle_modes");
// Create only
shm::handle h1(name.c_str(), 256, shm::create);
EXPECT_TRUE(h1.valid());
// Open existing
shm::handle h2(name.c_str(), 256, shm::open);
EXPECT_TRUE(h2.valid());
// Both modes
shm::handle h3(name.c_str(), 256, shm::create | shm::open);
EXPECT_TRUE(h3.valid());
}
// Test multiple handles to same shared memory
TEST_F(ShmTest, MultipleHandles) {
std::string name = generate_unique_name("multiple_handles");
const std::size_t size = 512;
shm::handle h1(name.c_str(), size);
shm::handle h2(name.c_str(), size, shm::open);
ASSERT_TRUE(h1.valid());
ASSERT_TRUE(h2.valid());
// Should point to same memory
int* data1 = static_cast<int*>(h1.get());
int* data2 = static_cast<int*>(h2.get());
*data1 = 12345;
EXPECT_EQ(*data2, 12345);
}
// Test large shared memory segment
TEST_F(ShmTest, LargeSegment) {
std::string name = generate_unique_name("large_segment");
const std::size_t size = 10 * 1024 * 1024; // 10 MB
shm::handle h(name.c_str(), size);
if (h.valid()) {
EXPECT_GE(h.size(), size);
// Write pattern to a portion of memory
char* mem = static_cast<char*>(h.get());
for (std::size_t i = 0; i < 1024; ++i) {
mem[i] = static_cast<char>(i % 256);
}
// Verify pattern
for (std::size_t i = 0; i < 1024; ++i) {
EXPECT_EQ(mem[i], static_cast<char>(i % 256));
}
}
}