Delivering More With Less


Introduction

 
Product maps need to be created after analyzing technology and market trends, and technology changes need to be anticipated early in software development.
 
Technical standards need to be created for product development and organization operations. A strategic plan needs to be developed for product development activities.
 
Technology and engineering resources need to be selected for optimal value so that product development deadlines are met and customer requirements are realized. The product map needs to be correctly defined and documented. Senior team members and architects need to be hired to plan for and recruit the development and quality-assurance team members.
 
This team will execute the technology strategy, which covers the creation of technology platforms, products, partnerships, and external relationships. A top-flight R&D team needs to be assembled to oversee research and development. Metadata needs to be created by gathering functional capabilities and product design details; it drives the creation of code, unit tests, functionality tests, acceptance test cases, automated test cases, and documentation.
 

Project Strategy

 
To create the metadata and generate the software project artifacts, product development tools, technologies, and approaches need to be leveraged. This cuts developer coding time and improves quality by focusing testing on the unique functional aspects of the product; the structural aspects are not considered. Support capabilities such as monitoring, logging, auditing, management, and other infrastructure features should be built into the software. Usability needs to be factored in while creating this metadata-driven architecture platform, and multichannel capabilities (mobile, web, social, rich client, desktop, SMS, kiosk, and IVR) should be added. Customization features should be part of the platform: it should offer flexible, non-intrusive ways to customize and extend the functional capabilities of the product. Separately, vertical-specific solution accelerators can be created by gathering architectural requirements for the core and for each vertical; the core infrastructure will support different product configurations, data interfaces, and industry-standards compliance.
 

Project Best Practices

 
Best practices used for delivering more with less are listed below:
  • Assess the practicality of product ideas based on projected costs and sales potential; the product architecture and implementation need to allow for future development and maintenance.
  • Create a highly productive developer environment, using tools to measure unit test coverage, code size, defect densities, and throughput.
  • Use appropriate DevOps and CI/CD tools such as Ansible, Puppet, Chef, Jenkins, GitLab CI/CD, CircleCI, Bamboo, and Travis CI.
  • Measure and tune the performance of key software components using performance-monitoring and code-analysis tools.
  • Enhance code quality with automated testing and automated code-coverage tools.
  • Maintain a low attrition rate in the team by implementing employee-friendly policies and practices in engineering, operations, HR, and other support functions.
  • Reduce operational expenses while maintaining SLAs through resource optimization and contract renegotiation.
  • Enhance system availability from 95% to 97.5% by standardizing environments, upgrading software, monitoring, and replacing poorly performing legacy components.
  • Enhance automation test coverage from 15% to 30% by creating automated test suites, thereby reducing regression effort.
  • Enhance code coverage from 0 to 25% through mandatory unit tests and continuous integration with DevOps tools such as Jenkins.
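The coverage target in the last practice can be enforced mechanically in the build rather than by policy. A minimal sketch of such a gate, with an illustrative 25% threshold; the function and constant names are hypothetical:

```python
# Hypothetical CI coverage gate: fail the build when measured line
# coverage falls below the agreed target (25% here, per the text above).
COVERAGE_TARGET = 25.0  # percent

def coverage_gate(lines_total: int, lines_covered: int,
                  target: float = COVERAGE_TARGET) -> bool:
    """Return True when measured line coverage meets the target."""
    if lines_total == 0:
        return False  # no measurable code counts as a failure
    coverage = 100.0 * lines_covered / lines_total
    return coverage >= target
```

In a CI job, a script along these lines would read the coverage report produced by the test run and exit non-zero when the gate fails, blocking the merge.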

Project Scope

 
A staged implementation based on application and high-priority functionality can be planned for the initial iteration, with medium- and low-priority functionality planned for subsequent iterations. Iterative reworking of the entire codebase can be done to meet long-term componentization and productization goals. Under the first option, core functionality can be built lightweight to meet immediate market needs; under the second, the core can be evolved toward long-term market needs. Constant checkpoints and feedback can be planned between the customer and project teams.
 
Risks can be addressed in the plan to eliminate, moderate, or mitigate all significant internal and external risks posed to the deliverables, quality, schedule, and budget of this project in a cost-effective manner. Risks need to be identified very early. Risk assessment can be done for technical, project, and resource risks, and a mitigation strategy should be defined for each risk. Effectiveness should be assessed during the project, and exposure to each risk reassessed.
 
Typically the project plan initially tests against a single reference configuration only. The matrix of possible configurations of OS, browser, and database can be addressed in the project plan, with testing done in iterations: the reference configuration first, then the other configurations in order of priority. The process can be changed from traditional RUP/Agile to a spiral process methodology, which builds high-priority functionality first to de-risk the project early; managing risk and realizing high-priority functionality is easier in a spiral methodology.
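The configuration-matrix idea can be sketched as ordering every OS/browser/database combination so that the reference configuration is tested first; all product names and priorities below are illustrative, not taken from the article:

```python
from itertools import product

# Hypothetical support matrix; names and priorities are illustrative.
OSES = ["Windows", "Linux"]
BROWSERS = ["Chrome", "Firefox"]
DATABASES = ["PostgreSQL", "Oracle"]

REFERENCE = ("Windows", "Chrome", "PostgreSQL")  # tested in iteration 1
PRIORITY = {REFERENCE: 0}  # lower value = tested earlier

def ordered_configurations():
    """All OS/browser/database combinations, reference configuration first."""
    combos = list(product(OSES, BROWSERS, DATABASES))
    return sorted(combos, key=lambda c: PRIORITY.get(c, 1))
```

Later iterations would assign finer-grained priorities to the remaining combinations instead of the single catch-all bucket used here.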
 

Performance Analysis

 
The layered architecture can be analyzed for performance bottlenecks in the code after the development phase. Queries should be analyzed for performance optimization in both schema definition and query SQL, and indices need to be created in the schema to make the queries faster. Cache management can be added if missing. An archival process should be put into place to move retained data (every 90 days) to the archive. For overall performance requirements, the system should use established best practices for achieving optimal performance, including caching of static and/or commonly accessed data and parameters, use of connection and thread pooling and similar techniques, reduction of network round-trips, use of stored procedures for DBMS access, and support for clustered database servers.
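A minimal sketch of the indexing and 90-day archival ideas, using an in-memory SQLite database; the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")
conn.execute("CREATE TABLE events_archive (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")
# Index the column the retention query filters on so it avoids a full scan.
conn.execute("CREATE INDEX idx_events_created ON events(created_at)")

conn.execute("INSERT INTO events VALUES (1, date('now', '-120 days'), 'old')")
conn.execute("INSERT INTO events VALUES (2, date('now'), 'recent')")

def archive_older_than(days: int = 90) -> int:
    """Move rows older than `days` days into the archive; return rows moved."""
    cutoff = f"date('now', '-{days} days')"
    conn.execute(f"INSERT INTO events_archive SELECT * FROM events WHERE created_at < {cutoff}")
    cur = conn.execute(f"DELETE FROM events WHERE created_at < {cutoff}")
    conn.commit()
    return cur.rowcount
```

In a production DBMS the same move-then-delete would typically run inside one transaction on a schedule, with the cutoff parameterized rather than interpolated.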
 
The ability to scale the number of users is critical to ensuring the system can be deployed in large and geographically diverse client environments. In addition to applying the techniques described under general performance above, the primary requirement for interactive scalability is to ensure the application fully supports load balancing in the server layers. This ties interactive scalability to the number of servers, which can be scaled out as needed.
 

Test Automation

 
Good test automation helps businesses realize reduced test-execution time. Reduced cost of test execution and an increase in ROI can be achieved easily through automated regression test suite execution. Test assets can be reused across organizations, ensuring scalability and minimizing rework. Consistent, repeatable test execution and improved resource utilization free up test engineers' time and help them explore corner test conditions.

A basic automation approach consists of predominantly record-and-playback, minimal scripting, development from scratch, regular maintenance, and disposable test scripts. A reusable automation test suite, in contrast, needs to support keyword- and data-driven approaches. Automation tests can be grouped by functional category and made extensible across domains, and they can be used by non-technical users to execute tests and monitor results without tool proficiency. Such a reusable framework facilitates customized reporting, functions, suites, applications, and reports that can be delivered through the web, e-mail, and other formats. Regression test suites help cut down defects and ensure test coverage across the application's functionality and modules.
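The keyword- and data-driven idea can be sketched as test steps expressed as plain data, so new cases can be added without scripting; the keywords and the toy system under test below are illustrative:

```python
# Keyword table: each keyword maps to an action on the system under test
# (a toy calculator here; real suites would drive the application).
KEYWORDS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

# Data rows: one test case per row -- keyword, inputs, expected result.
# Non-technical users extend the suite by adding rows, not code.
TEST_DATA = [
    ("add", (2, 3), 5),
    ("multiply", (4, 5), 20),
]

def run_suite(rows):
    """Execute each data row against its keyword; return (passed, failed)."""
    passed = failed = 0
    for keyword, args, expected in rows:
        actual = KEYWORDS[keyword](*args)
        if actual == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

A fuller framework would load the rows from a spreadsheet or CSV and emit the customized reports the text mentions.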
 

Quality Assurance

 
Effective quality assurance focuses on sharply reducing the number of defects introduced into the software. Defect prevention is an explicit focus during the software development and implementation life-cycle. Mandatory change-request management, including impact analysis and design and test-case reviews prior to coding, is among the best practices followed in the testing phase. The QA team regularly performs root-cause analysis of defects to eliminate repetitive mistakes, and trend analysis of quality metrics to measure and manage the quality of the software as it is being built. The QA team focuses on quickly finding and eliminating defects, ideally in the same phase in which they were introduced.
 
Phase containment is an explicit focus during the software development and implementation life-cycle. Mandatory design reviews for all new and updated designs need to be performed, and mandatory peer code reviews of all new and updated code should be done, with random-sample code reviews by technical leads. Mandatory unit testing and recording of results should be done by the development team. Mandatory automated smoke testing is done prior to acceptance of a build by testers, and release notes should be provided to testers.
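The smoke-test gate described above might be sketched as a set of fast checks that must all pass before testers accept a build; the checks themselves are placeholders for real probes:

```python
# Placeholder smoke checks; real ones would probe a deployed build.
def app_starts() -> bool:
    return True  # e.g. the process launches and reports healthy

def login_works() -> bool:
    return True  # e.g. a canned credential reaches the landing page

SMOKE_CHECKS = [app_starts, login_works]

def accept_build() -> bool:
    """Accept the build for testing only if every smoke check passes."""
    return all(check() for check in SMOKE_CHECKS)
```

Running this automatically after each build deployment keeps broken builds from ever reaching the test team.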
 
The system should fully support the ability to establish and execute load testing for interactive, external-integration, and batch-processing components. System management and diagnostic requirements are gathered for debugging, tracing, logging, and managing the performance of the applications. Scenarios such as statistical analysis, health monitoring, and policy condition scenarios need to be supported.
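A minimal load-test harness along the lines described above might fire concurrent calls at a component and record per-call latency; `component` here is a stand-in for a real interactive, integration, or batch entry point:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def component() -> str:
    """Stand-in for the system under load."""
    time.sleep(0.01)  # simulated work
    return "ok"

def load_test(concurrency: int, requests: int):
    """Run `requests` calls with `concurrency` workers; return latencies (s)."""
    latencies = []
    def timed_call():
        start = time.perf_counter()
        component()
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(requests):
            pool.submit(timed_call)
    # The `with` block waits for all submitted calls to finish.
    return latencies
```

Statistics over the returned latencies (percentiles, throughput) would feed the statistical-analysis and health-monitoring scenarios the text names.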
 

Change Management

 
Change is the only constant in life, and especially in software projects. Multiple factors lead to change: extrinsic factors, the business, the operating environment, requirements, and regulations.
 
Business value can be measured using measures such as the breadth of functionality, user satisfaction with timeliness and accuracy, support for the overall business strategy, and effectiveness of business integration. Technology efficiency can be measured using metrics such as maintenance effort, the effort needed to extend and add functionality, the level of support for effective and enhanced integration, the degree of modularity and encapsulation of functional components, impacts on the end user and the customer, and support for business agility and flexibility.
 

Software Configuration Management

 
The software configuration management strategy needs to be planned to fully leverage current development tools, technologies, and approaches to centralize and automate the production and maintenance of as much software code as possible. The focus needs to be on developer time spent coding and testing the unique functional aspects of the product rather than on the structural aspects. The tools selected need to support, within the software itself, a full and rich test/build/deploy automation capability that allows us to continuously test, re-test, build, and deploy our software as changes are made, taking a “continuous quality” approach to software engineering.
 
The SCM tool is used by the development manager and project leads to create the initial structure of the main branch, which developers check into the source control system. After the initial structure is ready, it is branched into a development branch where developers can start working. When development is ready to be merged into the main branch, the integration runs in two steps: a forward-integration merge is performed from main to dev, with any conflicts resolved manually, and the code in the dev branch is tested to ensure it passes all unit tests and quality gates; a reverse-integration merge is then performed from dev to main. Once the main branch in the core team project is ready to be merged into the functional team project, forward integration is performed from main in the core team project to main (or a shared branch) in the functional team project. Any development team in the functional team project can pull changes to the shared code from the main branch when they are ready to integrate them.
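The forward/reverse-integration flow can be modelled roughly as follows, with branches as sets of change IDs; real SCM tools also detect and resolve conflicts, so everything here is illustrative:

```python
# Toy model of the branch flow: a merge copies changes from source
# branch to target branch.
branches = {"main": {"c1"}, "dev": {"c1"}}

def merge(source: str, target: str) -> None:
    """Integrate all changes from `source` into `target`."""
    branches[target] |= branches[source]

branches["dev"].add("c2")   # developers commit to the dev branch
merge("main", "dev")        # forward integration: refresh dev from main
merge("dev", "main")        # reverse integration after tests and quality gates
```

The same two-step pattern repeats between the core project's main branch and the functional project's shared branch.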
 

DevOps and CI/CD

 
Continuous Integration (CI) tools can help detect integration issues as soon as the problem code is checked into the source control system. They continuously analyze code quality and provide detailed reports on unit test coverage, compliance with best practices and style, code duplication, etc. DevOps tools can automatically and continuously deploy (CD) the application to staging environments in the cloud or in-house, where it can be tested manually. The build strategy needs to include scheduled, continuous-integration (CI), and on-demand builds. Scheduled builds build and deploy the application every day to the cloud and in-house environments. Continuous-integration builds perform only build and code-quality analysis and are triggered by check-ins to the source control system. On-demand builds perform operations such as build and deployment to a staging environment when that environment is used for detailed testing and overriding the installation at scheduled intervals is not desirable. A series of code-quality analysis tasks is performed as part of the automated builds, including running all unit tests, collecting code-coverage statistics, analyzing the code for violations of coding standards and best practices, analyzing code duplication, etc.
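The three build types might be sketched as a simple trigger-to-task mapping; the task lists are illustrative, not an actual pipeline definition:

```python
# Hypothetical mapping from build trigger to ordered task list,
# following the scheduled / CI / on-demand split described above.
BUILD_PLANS = {
    # Nightly: full build plus deployment to cloud and in-house environments.
    "scheduled": ["build", "unit_tests", "code_quality", "deploy_cloud", "deploy_inhouse"],
    # Triggered by check-in: build and code-quality analysis only.
    "ci": ["build", "unit_tests", "code_quality"],
    # Manual: deploy to a staging environment reserved for detailed testing.
    "on_demand": ["build", "deploy_staging"],
}

def tasks_for(trigger: str) -> list:
    """Return the ordered task list for a given build trigger."""
    return BUILD_PLANS[trigger]
```

In a real CI server each task list would correspond to pipeline stages, with the code-quality stage running the unit-test, coverage, standards, and duplication analyses listed above.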

Why? - Delivering More with Less

 
The strategy of delivering more with less is needed to bring high-value, relevant business capabilities to market quickly. It helps achieve high levels of client satisfaction and quality. Customization and extensibility need to be supported so that the product can be adapted to different client and market situations. This strategy might also need to support tight, cohesive integration with other products and with clients' internal systems, and the product strategy might align product evolution with market changes.

