Imperfect C++: Practical Solutions for Real-Life Programming, by Matthew Wilson
Chapter 6. Scoping Classes |
6.2. State

The safety that comes with scoping classes can be invaluable when acquiring resources that represent program state. A classic application of this is to scope the acquisition of synchronization objects [Schm2000]. Failure to release an acquired synchronization object has pretty obvious consequences (deadlock, crash, unemployment), so it's as well to make sure you're safe. A simple template definition would be like that shown in Listing 6.6:

Listing 6.6.

template <typename L>
class LockScope
{
public:
  LockScope(L &l)
    : m_l(l)
  {
    m_l.Lock();
  }
  ~LockScope()
  {
    m_l.Unlock();
  }
protected:
  L &m_l;
};

This constrains the lockable type L to have the methods Lock() and Unlock(). That seems a reasonable first assumption, but there are plenty of synchronization types that don't have these method names, so we might generalize further (see Listing 6.7).

Listing 6.7.

template <typename L>
struct lock_traits
{
public:
  static void lock(L &c)
  {
    lock_instance(c);
  }
  static void unlock(L &c)
  {
    unlock_instance(c);
  }
};

template< typename L
        , typename T = lock_traits<L>
        >
class lock_scope
{
public:
  lock_scope(L &l)
    : m_l(l)
  {
    T::lock(m_l);
  }
  ~lock_scope()
  {
    T::unlock(m_l);
  }
// Members
private:
  L &m_l;
  . . .

For brevity I've skipped a step here, which is a practical measure to cope with the fact that you might want to use the template on types from other namespaces. We could have just provided a degenerate version of lock_traits that would call, presumably, lock() and unlock() methods on the instances to which it was applied. But if we had a lock class in another namespace, we'd have to specialize lock_traits back in lock_scope's namespace, and that's a pain (as described in the coverage of shims in Chapter 20). So lock_traits is defined in terms of the functions lock_instance() and unlock_instance(). Rather than specializing, with all the potential junk and effort that goes along with that, you can just define lock_instance() and unlock_instance() along with your class, and Koenig lookup[3] (see sections 20.7 and 25.3.3) will handle the resolution, as can be seen with the thread_mutex class implemented for the Win32 platform (see Listing 6.8).
Listing 6.8.

class thread_mutex
{
// Construction
public:
  thread_mutex()
  {
    ::InitializeCriticalSection(&m_cs);
  }
  ~thread_mutex()
  {
    ::DeleteCriticalSection(&m_cs);
  }
public:
  void lock()
  {
    ::EnterCriticalSection(&m_cs);
  }
  void unlock()
  {
    ::LeaveCriticalSection(&m_cs);
  }
private:
  CRITICAL_SECTION m_cs;
  . . .

inline void lock_instance(thread_mutex &mx)
{
  mx.lock();
}
inline void unlock_instance(thread_mutex &mx)
{
  mx.unlock();
}

Now you can use your own synchronization class with ease.
thread_mutex s_mx;
. . .
{ // Enter critical region, guarded by s_mx
  lock_scope<thread_mutex> scope(s_mx);
  . . . // Do your thread-critical stuff here
} // Guard is released here.
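To make the namespace point concrete: suppose a lock class lives in another namespace and uses different method names. Rather than specializing lock_traits, you simply define the two shim functions alongside the class, and Koenig lookup finds them from the unqualified calls inside lock_traits<>. The following is a minimal sketch under assumed names: the otherlib namespace, the third_party_lock class, and its Acquire()/Release() methods are hypothetical, used only to illustrate the mechanism.

namespace otherlib
{
  // A hypothetical lock class with its own naming convention.
  class third_party_lock
  {
  public:
    void Acquire() { /* . . . platform-specific locking elided . . . */ }
    void Release() { /* . . . platform-specific unlocking elided . . . */ }
  };

  // Shim functions defined in the same namespace as the class. The
  // unqualified calls inside lock_traits<> resolve to these via
  // Koenig (argument-dependent) lookup, so no specialization of
  // lock_traits is required.
  inline void lock_instance(third_party_lock &l)
  {
    l.Acquire();
  }
  inline void unlock_instance(third_party_lock &l)
  {
    l.Release();
  }
} // namespace otherlib

otherlib::third_party_lock s_otherMx;
. . .
{ // Enter critical region, guarded by s_otherMx
  lock_scope<otherlib::third_party_lock> scope(s_otherMx);
  . . . // Do your thread-critical stuff here
} // Guard is released here.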
So that's kind of nice, you say: I can scope critical regions of my code with any synchronization object I choose. However, the flexibility we've built into the model can take us further. In analyzing the performance of the threaded server I mentioned previously, it transpired that some of the critical regions were too broad. Where this is the case, and the cost of effecting the exclusion locks is low compared with the cost incurred in contention (as is often the case with intraprocess locks), breaking the single critical region up into smaller ones can help performance. Naturally, this has to be valid from a logical point of view; if you break the coherence of your application, it doesn't really matter how fast it is. The following monolithic critical region:

typedef lock_scope<MX_t> lock_scope_t;

{ // Enter scope: either a function or an explicit block
  lock_scope_t scope(m_mx);
  . . . // Costly operation #1
  . . . // thread-neutral operations #1
  . . . // Costly operation #2
  . . . // thread-neutral operations #2
  . . . // Costly operation #3
} // release here

can be broken into three separate ones (see Listing 6.9):

Listing 6.9.

typedef lock_scope<MX_t> lock_scope_t;

{ // Enter scope: either function or explicit block
  lock_scope_t scope(m_mx);
  . . . // Costly operation #1
} // release here
. . . // thread-neutral operations #1
{ // Enter scope: either function or explicit block
  lock_scope_t scope(m_mx);
  . . . // Costly operation #2
} // release here
. . . // thread-neutral operations #2
{ // Enter scope: either function or explicit block
  lock_scope_t scope(m_mx);
  . . . // Costly operation #3
} // release here

Danger lurks here, however. First, from a practical point of view, it is all too easy for you (or rather, for a colleague, since you've not had the chance to document the rationale for your changes and disseminate it in triplicate around the development team) to insert code that needs to be protected into the code that lies between the three protected regions. CDIFOC![4]
Second, from a more theoretical point of view, what we are trying to effect is a temporary release of the lock for a defined period, followed by its reacquisition. Well, call me a didactic popinjay, but this sounds a lot like a situation in dire need of some RAII. The answer is as simple as it is elegant: lock_invert_traits<> (see Listing 6.10).

Listing 6.10.

template <typename L>
struct lock_invert_traits
{
  . . .
// Operations
public:
  static void lock(lock_type &c)
  {
    unlock_instance(c);
  }
  static void unlock(lock_type &c)
  {
    lock_instance(c);
  }
};

In this class, the locking is inverted by simply swapping the semantics of lock() and unlock(). Now we can change the critical region code back into a single region, and insert noncritical regions within it (see Listing 6.11).

Listing 6.11.

typedef lock_scope<MX_t>  lock_scope_t;
typedef lock_scope< MX_t
                  , lock_invert_traits<MX_t>
                  > unlock_scope_t;

{ // Enter scope: either function or explicit block
  lock_scope_t scope(m_mx);
  . . . // Costly operation #1
  { // release here
    unlock_scope_t scope(m_mx);
    . . . // thread-neutral operations #1
  } // Re-enter main scope
  . . . // Costly operation #2
  { // release here
    unlock_scope_t scope(m_mx);
    . . . // thread-neutral operations #2
  } // Re-enter main scope
  . . . // Costly operation #3
} // release here

There are countless other state-scoping classes. We'll see a glimpse of a personal favorite, a current directory scope class, in section 20.1. I've written a few of these in my time, and they're immensely useful. File-processing tools use them for descending into subdirectories to process the files there, and then ascending back to the starting position. The scoping is not without issues, however: the current working directory is usually an attribute of the process, rather than of the thread, so multithreaded use can cause nasties, although this is a function of the program's behavior rather than of the directory scope class. Furthermore, changing directory is ripe for failure, since it's all too easy to be given an invalid path (user input, a changing file system, and so on). As with most things, consultation of the documentation prior to judicious use is called for.
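To give a flavor of the idea ahead of section 20.1, here is a minimal sketch of such a directory scoping class for the Win32 platform, to match the thread_mutex example above. It is not the class presented in section 20.1; the name current_directory_scope, its changed() method, and the policy of silently ignoring a failed restore in the destructor are assumptions made purely for illustration.

#include <windows.h>

class current_directory_scope
{
public:
  explicit current_directory_scope(char const *newDirectory)
    : m_changed(false)
  {
    // Remember where we are, then attempt to move to the new directory.
    DWORD len = ::GetCurrentDirectoryA(sizeof(m_previous), m_previous);

    if( len > 0 &&
        len < sizeof(m_previous) &&
        ::SetCurrentDirectoryA(newDirectory))
    {
      m_changed = true;
    }
  }
  ~current_directory_scope()
  {
    // Restore the original directory, but only if we actually left it.
    if(m_changed)
    {
      ::SetCurrentDirectoryA(m_previous);
    }
  }
public:
  // Indicates whether the change of directory succeeded.
  bool changed() const
  {
    return m_changed;
  }
// Members
private:
  char  m_previous[MAX_PATH];
  bool  m_changed;
// Not to be copied
private:
  current_directory_scope(current_directory_scope const &);
  current_directory_scope &operator =(current_directory_scope const &);
};

Because changing directory can fail, the constructor records whether the change succeeded, and client code checks changed() before relying on it:

{ // Descend into a subdirectory to process its files
  current_directory_scope scope("subdir");

  if(scope.changed())
  {
    . . . // process the files here
  }
} // Back at the starting position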