This article, the fourth in our series on software bugs, is co-authored by Vikas Kumar and Sanjay Goel. Vikas can be reached at http://www.linkedin.com/pub/vikas-kumar/a/658/500
In the previous three articles, we began discussing software bugs. We proposed that software bugs can be classified based on the source of the underlying misconception. These misconceptions can relate to (i) programming fundamentals, (ii) operating system resources, (iii) the compiler, (iv) the database, or (v) software design.
This article lists common bugs that occur either due to design flaws in the software architecture or due to programmers' inadequate understanding of the existing architecture. Such misunderstanding can result in contractual violations among the modules of a software system, thereby causing anomalous system behaviour. We do not claim that this catalogue captures all misconceptions related to software design; it is necessarily partial.
1. Input parameter validation
Many functions misbehave when they receive unexpected input parameters, because such functions contain no special handling for them. This happens primarily because the responsibility for such handling is ambiguous. The software architecture can specify an interface layer that performs sanity checks on input/output data. Underlying modules can then use this interface layer to obtain data from various sources such as a Graphical User Interface (GUI), Command Line Interface (CLI), database, or files.
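Such a validation layer can be sketched as follows. This is a minimal illustration, not the article's design; the names (`InputError`, `validate_age`, `register_user`) are hypothetical.

```python
# Hypothetical sanity-checking interface layer: every module obtains
# validated data through functions like validate_age(), regardless of
# whether the raw value came from a GUI, CLI, database, or file.

class InputError(ValueError):
    """Raised by the validation layer for malformed or out-of-range input."""

def validate_age(raw):
    """Single chokepoint that turns untrusted input into a checked value."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise InputError("age must be an integer, got %r" % (raw,))
    if not 0 <= age <= 150:
        raise InputError("age out of range: %d" % age)
    return age

def register_user(name, raw_age):
    # Business logic below the interface layer can trust its inputs,
    # because unexpected values were rejected at the boundary.
    return {"name": name, "age": validate_age(raw_age)}
```

Because all data enters through one layer, the question "whose job is it to check this?" has a single, unambiguous answer.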
2. Error Handling
The philosophy for error handling must be clear and consistent across all modules in the system. Every error-handling path should include a clean-up or rollback facility to ensure that the system remains in the same state it was in before the failed operation began. Clean-up should free dynamically allocated memory, release any locks acquired, and undo any other activity that did not complete because an error condition was hit. Alternatively, error checks can be performed before making any changes to the system, thereby obviating the need for rollback or clean-up.
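Both strategies can be illustrated in a small sketch (the account-transfer scenario and all names here are illustrative, not from the article): the insufficient-funds check happens before any mutation, while the failure partway through is rolled back, and the lock is released on every path.

```python
# Sketch of error handling with clean-up and rollback.
import threading

lock = threading.Lock()

def transfer(accounts, src, dst, amount):
    """Move amount from src to dst; on any error, leave accounts unchanged."""
    lock.acquire()
    try:
        if accounts[src] < amount:
            # Check-before-change: no rollback needed for this error path.
            raise ValueError("insufficient funds")
        accounts[src] -= amount
        try:
            accounts[dst] += amount          # raises KeyError if dst is missing
        except KeyError:
            accounts[src] += amount          # rollback: undo the partial debit
            raise
    finally:
        lock.release()                       # clean-up: lock released on every path
```

The `finally` clause plays the role of the clean-up facility: whether the transfer succeeds or any error path is taken, the acquired lock is relinquished.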
3. Deadlock
A lock, provided by the OS as a synchronization primitive, gives the programmer a way to avoid race conditions while accessing a shared resource. However, careless use of synchronization primitives can result in deadlock. This scenario involves two or more threads of execution and two or more resources, such that each thread is waiting for a resource that another thread holds. Since the threads wait for each other to relinquish resources, none of them makes progress and the system hangs.
Typical deadlock scenario (Love, 2007)
|Thread 1|Thread 2|
|Acquire lock A|Acquire lock B|
|Try to acquire lock B|Try to acquire lock A|
|Wait for lock B|Wait for lock A|
For avoiding deadlock, the following practices are recommended:
- Nested locks must always be acquired in the same order.
- Prevent starvation by ensuring that functions always run to completion.
- Avoid acquiring the same lock twice.
- Keep lock design simple, and ensure that lock acquisition itself is free of race conditions.
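The first practice, consistent lock ordering, can be sketched as follows. This is an illustrative Python example (names like `ordered_pair` and `worker` are ours): even though the two threads name the locks in opposite order, both acquire them in one global order, so the circular wait cannot form.

```python
# Sketch: imposing a total order on nested lock acquisition.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def ordered_pair(x, y):
    """Return the two locks in a fixed global order (object id here),
    so every thread acquires nested locks the same way."""
    return sorted((x, y), key=id)

def worker(first, second, rounds=1000):
    global counter
    for _ in range(rounds):
        outer, inner = ordered_pair(first, second)
        with outer:
            with inner:
                counter += 1

# The two threads request the locks in opposite order -- the classic
# deadlock setup -- but ordered_pair() makes the acquisition order identical.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without `ordered_pair`, this program would match the table above and could hang; with it, both threads always take the same lock first and the run completes.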
4. Livelock
Livelock is similar to deadlock in the sense that none of the processes involved can make progress. However, the states of the processes involved keep changing. This happens when an algorithm detects a deadlock and recovers from it: if more than one process takes the recovery action, the deadlock detection algorithm can trigger repeatedly. Livelock can be avoided by ensuring that only one process takes the recovery action (Lomet, 1980).
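The effect of designating a single recovering process can be shown with a deterministic toy simulation (entirely our construction, not from the cited work): two processes each hold one resource and want the other's; each round, every "recovering" process releases its resource and then re-acquires it.

```python
# Toy round-based simulation of deadlock recovery vs. livelock.

def simulate(recovering, max_rounds=10):
    """Two processes (0 and 1); process i holds resource i and waits for
    the other's. Each round, processes in `recovering` release their
    resource and retry. Returns the round at which some process completed,
    or None if no progress was made (livelock)."""
    for rnd in range(1, max_rounds + 1):
        released = set(recovering)
        survivors = {0, 1} - released
        # A non-recovering process completes if the resource it wants
        # was just released by the other process.
        for p in survivors:
            if (1 - p) in released:
                return rnd
        # Otherwise, every recovering process re-acquires its own resource
        # and the identical deadlock reappears next round: no progress.
    return None
```

When both processes back off and retry, the same deadlock re-forms every round (livelock); when only one is designated to recover, the other completes in the first round.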
5. Reentrant function
Reentrant functions are those that can be called concurrently from different threads or processes. If such functions maintain state using global or static data, unexpected behaviour becomes possible. Identifying all such functions and modifying them to avoid static and global data is necessary for consistent behaviour.
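The contrast can be sketched in a few lines (function names are illustrative): the first version keeps its working state in a module-level variable, so two concurrent callers clobber each other's data; the second keeps all state local and is safe to call concurrently.

```python
# Sketch: non-reentrant vs. reentrant versions of the same function.

_buffer = []   # module-level (global) state shared by all callers

def format_report_unsafe(items):
    """Not reentrant: concurrent callers overwrite the shared buffer."""
    _buffer.clear()
    for item in items:
        _buffer.append(str(item))
    return ",".join(_buffer)

def format_report(items):
    """Reentrant: all working state lives on the caller's own stack frame."""
    buffer = []
    for item in items:
        buffer.append(str(item))
    return ",".join(buffer)
```

The fix in this sketch is simply moving the buffer from global to local scope; where state genuinely must persist, thread-local storage or passing a context object in from the caller serves the same purpose.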
6. Memory fragmentation
Memory fragmentation happens when the total available memory in the system is sufficient to satisfy a request, but no individual free memory chunk is large enough. To avoid this scenario, each task can have its own memory allocator, for which memory is reserved at the time the task is created. This ensures that aggressive memory allocation and freeing in one task will not cause fragmentation issues in other tasks of the system.
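A per-task allocator of this kind can be sketched as a fixed-size block pool (the class and method names are our invention, for illustration): all of the task's memory is reserved up front, and allocation simply hands out free block handles from that reserved region.

```python
# Toy per-task pool allocator: memory reserved at task creation, carved
# into fixed-size blocks, so churn in this task cannot fragment others.

class TaskPool:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self._mem = bytearray(block_size * num_blocks)   # reserved up front
        self._free = list(range(num_blocks))             # free block handles

    def alloc(self):
        """Return a handle to a free block; raise if this pool is exhausted."""
        if not self._free:
            raise MemoryError("task pool exhausted")
        return self._free.pop()

    def view(self, handle):
        """Writable view of the block identified by handle."""
        start = handle * self.block_size
        return memoryview(self._mem)[start:start + self.block_size]

    def free(self, handle):
        """Return a block to this pool's free list."""
        self._free.append(handle)
```

Because every block has the same size, any freed block can satisfy any later request, so this pool cannot fragment internally; and because the pool's memory is private to its task, exhaustion or churn here is invisible to other tasks.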
7. Memory leak
For inter-process or inter-thread communication, memory is generally allocated and used as shared memory. Sometimes the responsibility for freeing this shared memory is unclear, which results in memory leaks.
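One common remedy is to make ownership explicit in the design: exactly one party is designated the owner, and only the owner frees. A minimal sketch of that convention (all names hypothetical):

```python
# Sketch: explicit single-owner convention for a shared buffer.

class SharedBuffer:
    """Stand-in for a shared-memory region with an explicit free step."""
    def __init__(self, size):
        self.data = bytearray(size)
        self.freed = False

    def release(self):
        """Only the designated owner calls this, exactly once."""
        assert not self.freed, "double free of shared buffer"
        self.data = None
        self.freed = True

def producer(buf):
    # Producer writes into the shared buffer but never frees it.
    buf.data[0:5] = b"hello"

def consumer_and_owner(buf):
    # Consumer is the designated owner: it reads, then frees.
    message = bytes(buf.data[0:5])
    buf.release()
    return message
```

Writing the ownership rule into the interface (here, only `consumer_and_owner` may call `release`) removes the ambiguity that causes the leak: if neither side believed freeing was its job, the buffer would simply never be released.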
The programming community is invited to point out gaps in this catalogue. Faculty members are encouraged to use the catalogue in their courses. We would appreciate feedback from working professionals, faculty members, and enthusiastic students.