Programming in Practice Discipline


Programming in Practice is a discipline that systematically applies engineering principles to the design, development, and implementation of programs, including the implementation of algorithms, program text editing, and testing. It is part of the software engineering discipline, which also involves project management, software distribution, maintenance, and evolution of software systems.

Programming in Practice can be deployed thanks to widely accepted best practice rules and design patterns for the following well-defined subdomains: sequential programming, concurrent programming, real-time programming, parallel programming, and distributed programming.

The best practice rules refer to the established guidelines, methodologies, and techniques that are widely recognized as effective for creating high-quality software.

Some key best practices include:

  1. Requirement Analysis: clearly understand and document the requirements before starting development to ensure alignment with user needs,
  2. Design: create a well-thought-out design that outlines the architecture, modules, and interactions of the software components,
  3. Modularity: divide the software into small, manageable modules to improve maintainability, scalability, and reusability,
  4. Code Reviews: regularly review code to identify issues, ensure adherence to coding standards, and promote knowledge sharing among team members,
  5. Testing: implement comprehensive testing strategies, including unit testing, integration testing, system testing, and many others to validate the functionality, performance, and reliability of the software,
  6. Version Control: utilize version control systems (e.g., Git), together with a versioning scheme such as Semantic Versioning 2.0.0, to track changes, collaborate effectively, and manage codebase history,
  7. Documentation: maintain thorough documentation for code, APIs, and user manuals to facilitate understanding, usage, and future maintenance,
  8. Continuous Integration/Continuous Deployment (CI/CD): automate build, testing, and deployment processes to enable frequent releases and ensure code stability.

The main goal is to improve the quality, maintainability, and reliability of the programs while enhancing collaboration and productivity.

The development of design patterns is also a subject of programming in practice. Design patterns are reusable program parts dedicated to solving common problems encountered during program development. They provide templates or blueprints for structuring program text in a way that promotes modularity, flexibility, and maintainability. Design patterns encapsulate proven techniques for designing software architecture and implementing specific algorithms.

For the object-oriented programming concept, there are the following three categories of design patterns:

  1. creational: patterns focus on the program design to control instantiation of types,
  2. structural: patterns focus on the composition of types to form larger structures,
  3. behavioral: patterns focus on the implementation of communication between types.
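As an illustration of the creational category, the sketch below shows a simple factory in Python (used here only as an illustration language; the Shape hierarchy and the make_shape name are hypothetical): clients obtain instances through the factory and depend only on the abstract type, never on the concrete constructors.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract product: concrete shapes implement area()."""
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

def make_shape(kind: str, size: float) -> Shape:
    """Creational pattern (simple factory): instantiation of the
    concrete types is controlled in one place."""
    factories = {"square": Square, "circle": Circle}
    return factories[kind](size)
```

Because callers only see the abstract Shape type, new concrete shapes can be added without changing client code.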

By applying design patterns appropriately, developers can improve the structure, flexibility, and maintainability of their software systems, avoid code duplication, and reduce design complexity. In short, the main goal of programming in practice is to reduce development time and time to market, improving the monetization of the software development effort.

Program life cycle

Design Time

At design time, the computer program is developed to comply with a requirement specification, following best practices and employing appropriate design patterns. This includes the implementation and testing of the selected algorithms. Implementation is the process of editing and testing program text according to the selected programming language and development environment.


A computer program is always executed in a surrounding context called an execution platform, for example, the Common Language Infrastructure (CLI) or an operating system, to name only the most important. One of the responsibilities of operating systems is to protect computer programs from each other by running each program in an independent process. If a program fails, only that process is affected; programs running in other processes can continue to execute. As a result, addresses in one process have no meaning in another process. In a managed environment, application domains (or logical processes) and contexts provide a similar level of isolation and security at less cost and with a greater ability to scale than an operating system process.

Sequential Programming

Sequential programming principles refer to the concepts and techniques used in writing programs that describe the execution of instructions in a sequential, step-by-step manner. In other words, in sequential programming, instructions are executed one after another in a predetermined order, typically from top to bottom. The original execution sequence may change only as a result of applying control instructions, such as if, while, and many others.
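The step-by-step execution and the role of control instructions can be sketched as follows (Python used as an illustration language; the function is hypothetical):

```python
# Instructions run top to bottom; only control instructions
# (while, if) alter the predetermined order.
def sum_of_evens(limit: int) -> int:
    total = 0
    n = 0
    while n <= limit:          # control instruction: loop
        if n % 2 == 0:         # control instruction: branch
            total += n
        n += 1
    return total
```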

Sequential programming principles form a foundation for most programming languages. Modern programming languages are also built on other techniques and concepts, such as object-oriented programming and strong typing.

Object-oriented programming (OOP) is a programming paradigm based on reference types. At run time, a reference type is used to instantiate "objects", which can contain data (properties) and code (methods). The principles of object-oriented programming include:

  1. Encapsulation: refers to bundling data and methods within a single type and keeping them visible only internally,
  2. Abstraction: involves types that are, at least partially, left unimplemented,
  3. Inheritance: allows a type to inherit properties, attributes, and methods from another type,
  4. Polymorphism: enables implementation of the same abstraction by many concrete types.
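A minimal sketch of all four principles, assuming Python as the illustration language and a hypothetical Animal hierarchy:

```python
from abc import ABC, abstractmethod

class Animal(ABC):                      # abstraction: a partially unimplemented type
    def __init__(self, name: str):
        self._name = name               # encapsulation: internal state

    @abstractmethod
    def sound(self) -> str: ...         # left for concrete types

    def describe(self) -> str:
        return f"{self._name} says {self.sound()}"

class Dog(Animal):                      # inheritance: reuses __init__ and describe
    def sound(self) -> str:             # polymorphism: concrete implementation
        return "woof"

class Cat(Animal):
    def sound(self) -> str:
        return "meow"
```

Code that works with the abstract Animal type (e.g., calling describe) runs unchanged for every concrete implementation.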

A programming language is strongly typed if it requires every entity to have a well-defined data type and enforces type compatibility rules.

This section provides examples of selected design patterns related to sequential programming. It is worth stressing that in sequential programming, concurrency is sometimes applied implicitly, if at all. An example is the asynchronous design pattern, which is usually implemented using concurrent programming techniques, although the resulting code resembles sequential programming.
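A sketch of that asynchronous pattern, assuming Python's asyncio (the coroutine names are hypothetical): the main coroutine reads top to bottom like sequential code, yet the awaited tasks overlap in time.

```python
import asyncio

async def fetch_value(delay: float, value: int) -> int:
    # Suspends without blocking the thread; other tasks may run meanwhile.
    await asyncio.sleep(delay)
    return value

async def main() -> int:
    # Reads like sequential code, but the two coroutines run concurrently.
    a, b = await asyncio.gather(fetch_value(0.01, 1), fetch_value(0.01, 2))
    return a + b

result = asyncio.run(main())
```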

Concurrent programming

The terms multithreading and concurrent programming refer to a programming pattern that allows a program to execute operations at run time in response to nondeterministic events. Concurrency is when multiple sequences of instructions run in overlapping periods. In other words, the order of instruction execution is not determined in advance. A thread is a type that may be used to represent a sequence of instructions in this scenario. Deploying concurrent programming requires appropriate syntax constructs.
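A minimal sketch using Python's threading module (the worker function is illustrative): each Thread object represents an independent sequence of instructions, and the lock is the syntax construct guarding shared state against nondeterministic interleaving.

```python
import threading

counter_lock = threading.Lock()
results: list[int] = []

def worker(task_id: int) -> None:
    # The interleaving of workers is not determined in advance,
    # so access to the shared list must be synchronized.
    with counter_lock:
        results.append(task_id)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that the order of elements in results may differ between runs; only the set of completed tasks is deterministic.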

Real-time Programming is also a kind of concurrent programming. In this technique, the notion of time must additionally be taken into account as a factor determining the correctness of the program. It requires appropriate support from the programming language and programming environment.

The next kind of concurrent programming is Parallel Programming, which must allow the description of simultaneous execution of program operations across multiple processing units, such as CPU cores, when the computer is equipped with multiple processors. Usually, in a multi-processor environment, simultaneous execution is implemented implicitly, as an embedded mechanism based on the concurrent programming concept and a built-in scheduling mechanism.
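A sketch of that implicit scheduling, assuming Python's concurrent.futures: the pool hides worker creation and scheduling, so the caller only describes what to compute. (In CPython, truly simultaneous CPU-bound execution across cores would use ProcessPoolExecutor; ThreadPoolExecutor is used here only to keep the sketch self-contained.)

```python
from concurrent.futures import ThreadPoolExecutor

def square(n: int) -> int:
    return n * n

# The executor decides how the calls overlap in time; the caller
# never creates or schedules workers explicitly.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, [1, 2, 3, 4]))
```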

Distributed Programming

Distributed Programming involves designing and implementing computer programs that run on multiple interconnected computers, often referred to as nodes, and collaborate to achieve a common goal. Because, by design, distributed programs involve the simultaneous execution of program operations across multiple processing units, distributed programming must also embrace parallel programming.

Some key concepts in distributed programming include:

  1. Communication: nodes in a distributed system need to communicate with each other to share data and coordinate activities.
    • Interactive communication: a communication pattern describing an interaction between communicating entities where a selected one triggers sending a request and expects a response message from the other interconnected parties.
    • Reactive communication: a communication pattern describing an interaction between communicating entities where a selected one triggers sending a message and does not expect a reaction from the parties receiving it. The message send action must be non-blocking, meaning it should last as short as possible. It is a one-to-many interconnection relationship.
  2. Fault Tolerance: distributed systems must be designed to handle failures, such as network outages or node crashes. Fault tolerance mechanisms, like replication and redundancy, help maintain system functionality in the face of failures,
  3. Distributed Data Management: managing data across multiple nodes requires distributed data storage and retrieval mechanisms. Techniques like sharding, partitioning, and distributed databases are employed to ensure data consistency and availability,
  4. Consistency and Replication: achieving consistency in a distributed system, where multiple copies of data may exist, is challenging. Replication is often used to improve fault tolerance and performance, but maintaining consistency among replicas requires careful synchronization,
  5. Scalability: distributed systems should be scalable to handle an increasing number of nodes or users. Horizontal scaling (adding more nodes) and vertical scaling (upgrading individual nodes) are common strategies for achieving scalability,
  6. Security: security concerns, such as authentication, authorization, data integrity, data confidentiality, and data non-repudiation are crucial in distributed systems where data is transmitted across networks,
  7. Load Balancing: distributing incoming requests evenly across multiple nodes helps ensure optimal resource utilization and prevents individual nodes from becoming overloaded,
  8. Coordination and Consensus: distributed systems often require coordination and consensus algorithms to ensure that all nodes agree on certain decisions or states.

Understanding and applying these concepts is essential for developing robust, efficient, and reliable distributed systems that can scale and adapt to changing conditions in a networked environment.
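The reactive, one-to-many communication pattern from the list above can be sketched in-process (Python as the illustration language; the MessageBus type is a hypothetical stand-in for a real network transport): the publisher fires a message and does not wait for any response from the subscribers.

```python
from typing import Callable

class MessageBus:
    """Reactive, one-to-many pattern: publish() triggers all handlers
    and expects no reply from any of them."""
    def __init__(self) -> None:
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, message: str) -> None:
        # Handlers are assumed to be non-blocking, as the pattern requires.
        for handler in self._subscribers:
            handler(message)

bus = MessageBus()
received: list[str] = []
bus.subscribe(received.append)
bus.subscribe(lambda m: received.append(m.upper()))
bus.publish("update")
```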

Interoperability scenarios

Below are a few selected scenarios requiring a decision on which kind of programming must be applied. Of course, sequential programming is assumed to be the default approach required in all cases.

Program Entities Interoperability

  • Synchronous: It is a programming pattern describing an interaction between a programming entity and a method invoked by it, where the method is executed synchronously. In other words, further execution of the calling entity is postponed until the called one has finished. By design, the concurrent programming concept is not applicable to implement this interoperability relationship.
  • Asynchronous: It is a programming pattern describing an interaction between a programming entity and an action called by it, where the called action is executed simultaneously. In other words, further execution of the calling entity continues and is synchronized with the called one after it finishes. It requires concurrent programming.
  • Interactive: It is a programming pattern to describe an interaction between programming entities where the selected one triggers an action and expects a reaction from another interoperable party. If the triggered action is non-blocking the interaction may be implemented using a synchronous programming pattern, otherwise implementation using an asynchronous approach is recommended.
  • Reactive: It is a programming pattern to describe an interaction between programming entities where the selected one triggers an action and does not expect a reaction from it. It is a one-to-many programming entity interoperability relationship. This programming pattern may be used to implement the publisher-subscriber pattern. If the triggered action is non-blocking, which means that the interoperability action should last as short as possible, reactive programming may be implemented without concurrent programming, otherwise, concurrent programming must be applied.
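The synchronous and asynchronous patterns above can be contrasted in a short sketch (Python threading; the long_running function is hypothetical): the synchronous call postpones the caller, while the asynchronous one lets the caller continue and synchronize later.

```python
import threading
import time

def long_running(log: list[str]) -> None:
    time.sleep(0.05)
    log.append("work done")

log: list[str] = []

# Synchronous: further execution of the caller is postponed
# until the call returns.
long_running(log)
log.append("after sync call")

# Asynchronous: the caller continues immediately and synchronizes
# with the worker later via join().
worker = threading.Thread(target=long_running, args=(log,))
worker.start()
log.append("after async start")   # usually runs before the worker finishes
worker.join()                     # synchronization point
```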

Dependency Injection (DI)

Dependency Injection (DI) is a programming pattern in which an abstraction, defined as part of the object-oriented programming concept, is employed at run time to provide an instance of a concrete type when that type is not visible, or shall not be referred to for some reason, at design time. In other words, the new operator is not applicable for creating an instance in the location where the instance is to be used. The impossibility of dealing directly with the concrete type in question, or deliberately avoiding direct access to it, shall be recognized as the problem to be solved.

Below are selected reasons why a concrete type is not visible, or shall not be referred to, in the location where it is to be used:

  1. the type of concern is defined in the programming layer above and shall not be used directly to comply with the separation of concerns and responsibility rules,
  2. the type of concern is defined in a not-referenced project, for example, in a unit test project providing testing data that must not be located inside a shippable deliverable, and as a result, the type is not visible,
  3. the type of concern will be defined later for some reason, for example, to avoid waiting for the final implementation when program development work is forked to be conducted simultaneously by independent teams,
  4. the type of concern is defined outside the current solution, for example, as part of a plug-in.

By design, to implement a Dependency Injection design pattern only sequential programming is required.
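A minimal constructor-injection sketch, assuming Python as the illustration language and hypothetical ILogger/Service names: Service depends only on the abstraction and never constructs a concrete logger itself, so a concrete type defined elsewhere (e.g., in a test project) can be injected.

```python
from abc import ABC, abstractmethod

class ILogger(ABC):
    """Abstraction visible at design time; the concrete type is not."""
    @abstractmethod
    def log(self, message: str) -> None: ...

class Service:
    def __init__(self, logger: ILogger) -> None:
        # The dependency is injected; Service never applies the
        # constructor of a concrete logger type.
        self._logger = logger

    def run(self) -> str:
        self._logger.log("service started")
        return "ok"

class ListLogger(ILogger):
    """A concrete type that could live in another layer or a test project."""
    def __init__(self) -> None:
        self.lines: list[str] = []
    def log(self, message: str) -> None:
        self.lines.append(message)

logger = ListLogger()
service = Service(logger)        # constructor injection
status = service.run()
```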

Inversion of Control (IoC)

It is a programming pattern that may be used to describe calling unknown methods whose implementation is not, or should not be, visible in the location where they are to be invoked. The following examples show typical scenarios:

  • an abstract method implemented elsewhere by a type that an instance is provided applying the dependency injection pattern,
  • a delegate object wrapping a set of methods assigned elsewhere to an event or delegate variable.

By design, to implement an Inversion of Control design pattern only sequential programming is required.
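Both scenarios reduce to invoking handlers assigned elsewhere; a sketch with a hypothetical Button type (Python as the illustration language):

```python
from typing import Callable

class Button:
    """The framework side: it invokes handlers it does not know at
    design time, so control is inverted relative to an ordinary call."""
    def __init__(self) -> None:
        self._handlers: list[Callable[[], None]] = []

    def on_click(self, handler: Callable[[], None]) -> None:
        self._handlers.append(handler)

    def click(self) -> None:
        for handler in self._handlers:   # methods assigned elsewhere
            handler()

clicks: list[str] = []
button = Button()
button.on_click(lambda: clicks.append("saved"))   # assignment happens here
button.click()                                    # invocation happens there
```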

GUI Interoperability

Interactive: It is an interoperability pattern between the computer user and the user interface where one party triggers an action and expects a reaction from the other one. An example of this kind of interoperability is a mouse click on a virtual button on the computer screen, followed by an expected reaction from the program responsible for rendering this button. According to the definition, it is a blocking relationship, hence the round-trip latency should be minimized to keep the user interface responsive and react smoothly to the triggered action.

Reactive: It is an interoperability pattern between the computer user and the user interface where one party triggers an action and doesn't expect a reaction from the other one. An example of this kind of interoperability is updating the current time on a clock control rendered on the screen. If this kind of operation is non-blocking, it doesn't need concurrent programming. On the other hand, pressing a virtual button on the computer screen without any visual reaction is also reactive interoperability, but it should be avoided because it makes the user interface unresponsive; it could be recognized as no response to the user's demand.

Because the GUI should usually be recognized as a critical section, its interoperability with any underlying activity must be synchronized. How this is done depends on the framework used to implement the interoperability.

If the user interface is implemented as a process executed on an independent computer, distributed programming must additionally be applied.

Client-server interoperability

Client-server interoperability refers to the ability of clients and servers to communicate and interact effectively despite differences in their implementations, protocols, or platforms. In a client-server architecture, clients send requests to servers, which process those requests and return responses. Interoperability ensures that clients and servers from different vendors or running on different systems can understand and communicate with each other seamlessly.

Key aspects of client-server interoperability include:

  • Standard Protocols: both clients and servers adhere to standard communication protocols, such as HTTP, TCP/IP, or WebSockets, to ensure compatibility and interoperability. Standard protocols provide a common language for communication, enabling clients and servers to exchange data reliably,
  • Data Formats: both clients and servers agree on standard data formats for encoding and decoding messages. Using standard data formats ensures that data can be serialized and deserialized correctly on both ends of the communication,
  • API Contracts: servers expose well-defined APIs (Application Programming Interfaces) that specify how clients can interact with them. API contracts define the structure of requests and responses, including the expected parameters, formats, and semantics. Clients adhere to these API contracts when sending requests to servers, ensuring compatibility and consistency,
  • Security Mechanisms: interoperable client-server communication requires robust security mechanisms to protect data integrity, confidentiality, and authenticity,
  • Versioning and Compatibility: as client and server implementations evolve, maintaining backward compatibility and handling versioning becomes crucial for interoperability. Versioning strategies, backward compatibility checks, and graceful degradation mechanisms help ensure that clients and servers can communicate effectively across different versions.

By addressing these aspects, client-server interoperability enables heterogeneous systems to collaborate and exchange information seamlessly, facilitating the development of distributed applications and services that can interoperate across different platforms, technologies, and environments. This requires distributed programming.
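A minimal client-server sketch over HTTP with JSON as the agreed data format, using only the Python standard library (the /health path and the reply fields are illustrative, not a real API contract): the server and client agree on a standard protocol (HTTP) and a standard data format (JSON), which is what makes them interoperable.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # API contract: GET returns a JSON document (standard data format).
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args) -> None:   # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client knows only the protocol and the contract, not the server code.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as response:
    reply = json.loads(response.read())
server.shutdown()
```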
