Lecture 3: QA models, intro to static analysis
Software Development Lifecycle and Project Management Strategies
Overview of Previous Discussions
- Discussed the software development lifecycle and various project management strategies from the previous week.
- Emphasized understanding how project management operates within software development, including goal setting and methodologies for tracking progress.
Key Software Development Models
- Reviewed foundational development models such as Waterfall and V-model, focusing on their testing methodologies.
- Highlighted the importance of integrating testing at different stages in each model to ensure quality outcomes.
Risk Management in Software Development
- Introduced the Spiral methodology for managing high-risk developments with many uncertainties, aiming to mitigate risks effectively.
- Discussed monitoring levels of planning processes to ensure effective tracking of project statuses amidst varying complexities.
Big Bang Methodology
- Mentioned the Big Bang approach as a less structured method that exists in practice despite its challenges in being classified as a formal methodology.
Transitioning to Quality Assurance Models
- Shifted focus towards specific models relevant for developers concerning quality assurance practices during software creation. Emphasized practical applications over theoretical knowledge.
Quality Perspectives in Software Development
Introduction to McCall's Model
- Introduced McCall's model, which defines software quality from multiple perspectives rather than a single viewpoint. The model dates from 1977, making it one of the earliest structured treatments of software quality in software engineering.
Defining Quality Criteria
- Explained that each perspective includes functional requirements associated with factors contributing to successful performance within that perspective. Factors are linked to specific quality criteria essential for measuring success.
Metrics and Measurement
Understanding Software Development Perspectives
Overview of Key Perspectives in Software Development
- The discussion introduces three main perspectives in software development: Revision, Transition, and Operation. The order of these perspectives is not crucial.
- The Revision Perspective focuses on the concept of revisiting code to assess its usability and maintainability.
- In the context of the Revision Perspective, key factors include:
- Maintainability
- Flexibility
- Testability
- A developer considers a codebase good if new features can be added quickly or existing functionalities can be modified without significant effort.
- It’s emphasized that the quality of underlying code may not always correlate with how well it serves users; thus, internal code quality can sometimes be irrelevant from an external perspective.
Transition Perspective Explained
- The Transition Perspective addresses scenarios where software needs to operate under different conditions or environments.
- This perspective involves assessing how software can adapt when moved to different architectures (e.g., from Windows to Linux).
- Factors influencing this transition include:
- Portability
- Reusability
- Interoperability
- Examples are provided regarding transitioning applications across platforms, such as moving from desktop to mobile or adapting for cloud environments.
Importance of Portability and Reusability
- Portability refers to how easily software can be transferred between different systems or platforms (e.g., Android to iOS).
- Reusability relates to leveraging existing code for new applications. For instance, adapting a desktop application into a web application while maintaining core functionalities.
- Challenges arise when older technologies need updating; developers must determine what parts of the original codebase remain applicable in new contexts.
Interoperability and Integration in Software Development
Understanding Interoperability
- Interoperability refers to how easily an application can work within its environment and transition to another. It emphasizes the importance of integration capabilities.
- A typical example involves connecting to a specific external system (referred to in the lecture as ANF), where hardcoded URLs cause problems if the server configuration changes.
Designing for Flexibility
- Properly designed connectors allow for easy adjustments when new integrations are required, demonstrating the importance of planning for future needs.
- When developing services like a webshop, it’s crucial that products and services integrate seamlessly with existing company systems.
Challenges of Integration
- Each company has unique interfaces, which complicates integration efforts. Developers must either create custom solutions or design software that allows third-party developers to build integrations.
- The ability to adapt quickly is vital; if a business acquires another entity, the cost-effectiveness of deploying existing solutions becomes critical.
Transitioning Between Systems
- Transitioning from one system to another should be efficient; if initial deployment costs are high, subsequent implementations should ideally be much lower.
- This transition requires higher-level planning and decision-making from managers rather than just technical execution by developers.
Operational Perspectives on Software Quality
- The operational perspective focuses on software functionality, including correctness (does it do what it's supposed to?), reliability (how consistently does it perform?), and efficiency (how well does it solve problems?).
- Integrity is highlighted as a key factor concerning internal security measures within software systems.
Quality Criteria in Software Development
- Factors related to operational perspectives include correctness, reliability, efficiency, and integrity—each contributing significantly to overall software quality.
- These factors can be communicated effectively between developers and stakeholders who seek improvements in software performance based on these criteria.
Organizing Quality Criteria
Metrics for Code Maintainability
Importance of Function Length
- The length of functions is proposed as a metric for maintainability, indicating that shorter functions are easier to manage and understand.
- A specific criterion can be set, such as limiting function length to 82 lines, which serves as a measure of the software's maintainability.
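The length criterion above can be checked automatically. A minimal sketch in Python using the standard ast module (the 82-line limit is the lecture's example figure; the function names are illustrative, not from the lecture):

```python
import ast

def function_lengths(source: str) -> dict[str, int]:
    """Map each function name to its length in source lines."""
    lengths = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # end_lineno is available on Python 3.8+
            lengths[node.name] = node.end_lineno - node.lineno + 1
    return lengths

def too_long(source: str, limit: int = 82) -> list[str]:
    """Names of functions exceeding the agreed maintainability limit."""
    return [name for name, n in function_lengths(source).items() if n > limit]

sample = "def short():\n    return 1\n"
print(function_lengths(sample))  # {'short': 2}
```

A real tool would also handle nested and async functions and report line ranges, but the principle is the same: derive a number from the code and compare it against the agreed criterion.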
Simplicity in Functions
- Simplicity is emphasized as a key factor in maintainable code; functions should not perform too many tasks simultaneously.
- A potential metric for simplicity could be the maximum number of lines in a function; exceeding this may indicate complexity.
Modular Structure
- The modularity of code is highlighted as an architectural aspect that affects maintainability; it examines how well classes and packages are organized.
- Good architecture allows clear separation between backend and frontend components, enhancing overall project organization.
Measuring Bugs and Fixes
- The number of bugs reported after a release serves as an indicator of code quality; high bug counts may suggest issues with the software's robustness.
- Bug fix turnaround time is another important metric; shorter times imply better maintainability and ease of fixing issues.
Changeability and Extensibility
- Changeability refers to how easily new features can be added or existing ones modified, which is crucial for adapting to evolving requirements.
Understanding Software Metrics and Quality
Flexibility and Maintainability
- The discussion begins with the importance of flexibility in software development, emphasizing that it should be felt through guidelines rather than just metrics.
- Maintainability is highlighted as a key aspect, focusing on the ability to work with known elements while preparing for unknown future challenges.
Testability Metrics
- Testability is defined by two indicators: the amount of test code written and the coverage it provides, referred to as test coverage.
- An example illustrates that if 200 lines of unit tests cover 100 out of 500 lines of application code, this results in a 20% coverage rate.
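The arithmetic behind that example can be written out directly; a trivial sketch (the numbers are the lecture's, the function name is ours):

```python
def coverage_rate(covered_lines: int, total_lines: int) -> float:
    """Fraction of application lines exercised by the test suite."""
    return covered_lines / total_lines

# The lecture's example: 200 lines of unit tests touch 100
# of the application's 500 lines.
rate = coverage_rate(100, 500)
print(f"coverage: {rate:.0%}")  # coverage: 20%
```

Note that the 200 lines of test code do not appear in the coverage figure at all; coverage measures only how much of the application code the tests reach.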
Coverage Considerations
- Achieving high test coverage is beneficial but should not come at the cost of excessive additional code; a balance must be struck between quality and quantity.
- Transitioning to portability, criteria such as hardware and software independence are introduced as essential factors for evaluating software quality.
Portability Indicators
- The number of supported platforms serves as a significant indicator of portability; more platforms imply better adaptability.
- It’s crucial to assess relevant indicators based on the specific environment in which one is working, recognizing that different contexts yield different meanings for "portability."
Reusability Factors
- Reusability is linked to modular design; well-modularized software indicates good reusability potential.
- Various factors contribute to assessing quality from multiple perspectives, including reusability metrics like the number of reused components within a codebase.
Negative Indicators: Code Duplication
- A negative indicator discussed is code duplication; excessive duplication suggests poor generalization and modularity within the codebase.
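A crude duplication detector can be sketched in a few lines: normalize the source, then look for repeated windows of consecutive lines. This is an illustrative approximation only (real clone detectors compare token or AST sequences, not raw lines):

```python
from collections import defaultdict

def duplicate_blocks(source: str, window: int = 3) -> set[tuple[str, ...]]:
    """Blocks of `window` consecutive (stripped, non-empty) lines that
    occur more than once -- a rough indicator of copy-pasted code."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    counts = defaultdict(int)
    for i in range(len(lines) - window + 1):
        counts[tuple(lines[i:i + window])] += 1
    return {block for block, n in counts.items() if n > 1}

src = "a = 1\nb = 2\nc = 3\nx = 9\na = 1\nb = 2\nc = 3\n"
print(duplicate_blocks(src))  # the repeated three-line block is flagged
```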
How to Measure Software Modularity and Interoperability?
Measuring Reusability in Software Components
- The speaker discusses the challenge of quantifying how many components have been reused in software development, emphasizing the need for a program that can assess modularity.
- There are dedicated tools available for measuring various criteria related to software quality; they tend to be complex and far from cheap.
Understanding Interoperability
- Interoperability involves connecting different systems, focusing on data exchange capabilities where information can be shared and understood between systems.
- An example is provided about a website that supports multiple languages through a data exchange process, allowing text to be exported for translation and then re-imported without direct interaction with external systems.
Integration vs. Data Exchange
- The distinction between integration and data exchange is highlighted; integration connects workflows while data exchange simply allows information transfer.
- A practical scenario illustrates how translation requests can be sent to multiple agencies, showcasing an integrated workflow that enhances efficiency.
Indicators of Integration Quality
- The speaker suggests considering the number of integrations as an indicator of system quality, drawing parallels with fitness tracking applications that seamlessly connect various services.
- It’s noted that successful integrations are designed to allow new applications to connect quickly without extensive manual effort from developers.
Factors Affecting Correctness in Software
- Key factors such as completeness, correctness, consistency, and accuracy are discussed regarding functional requirements in software behavior.
Understanding Software Reliability and Metrics
The Importance of Accurate Calculations
- There can be discrepancies between two accounting software systems, such as a 150-unit difference in calculations. This often arises from rounding errors or different calculation methods.
Defining Functional Requirements
- It is essential to define metrics for functional requirements, which are high-level and typically qualitative. A percentage can be calculated based on how many functional requirements have been successfully met.
Criteria for Software Reliability
- For software to be considered reliable, it must meet at least two critical factors: correctness and fault tolerance. Correctness ensures the software functions as intended, while fault tolerance addresses how well the system handles unexpected errors.
Fault Tolerance Explained
- Fault tolerance refers to the system's ability to continue functioning despite internal bugs or external issues like service outages. The resilience of a system is crucial when dealing with such failures.
Recovery Capabilities of Applications
- Recovery capabilities involve restoring an application to a previous state after an error occurs. This includes rolling back transactions in databases to maintain logical consistency with minimal data loss.
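Transaction rollback, mentioned above as a recovery mechanism, can be demonstrated with Python's built-in sqlite3 module; the table and values here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory DB standing in for real state
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # A transfer that fails halfway: the debit succeeds ...
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    # ... but an error occurs before the matching credit is applied.
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    # Roll back to the last consistent state: no partial transfer survives.
    conn.rollback()

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

The rollback restores logical consistency: either the whole transfer happens or none of it does, which is exactly the "minimal data loss" property described above.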
Measuring System Performance
- Key performance indicators (KPIs), such as Mean Time To Failure (MTTF) and Mean Time To Recovery (MTTR), help assess how quickly a system can recover from failures by analyzing logs for downtime and recovery times.
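From a log of failure and recovery events, the two indicators fall out of simple averaging. A sketch assuming an alternating fail/recover event log with numeric timestamps (the log format is invented for illustration):

```python
def mttf_mttr(events):
    """Mean time to failure and mean time to recovery from an alternating
    log of (timestamp, 'fail' | 'recover') events; the system is assumed
    operational from t = 0."""
    up_periods, down_periods = [], []
    last_t, state = 0, "up"
    for t, kind in events:
        if kind == "fail" and state == "up":
            up_periods.append(t - last_t)    # time spent running
            last_t, state = t, "down"
        elif kind == "recover" and state == "down":
            down_periods.append(t - last_t)  # time spent down
            last_t, state = t, "up"
    return (sum(up_periods) / len(up_periods),
            sum(down_periods) / len(down_periods))

log = [(100, "fail"), (110, "recover"), (300, "fail"), (320, "recover")]
print(mttf_mttr(log))  # (145.0, 15.0)
```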
Efficiency Metrics in Resource Utilization
- Efficiency focuses on optimizing runtime and resource usage, including memory, disk space, CPU time, and network bandwidth. These factors contribute significantly to overall system performance.
Quality Criteria for Software Development
- Quality criteria should be established during development to measure aspects like CPU usage or response time effectively. These metrics should ideally be automatically measurable for ongoing assessment.
Perspectives on Software Success Factors
- Different perspectives exist regarding what contributes to software success; these include shared factors across various viewpoints that enhance user satisfaction and operational efficiency.
Establishing Measurable Criteria
Software Quality Models and Their Evolution
Introduction to Software Quality Models
- The discussion begins with a reference to a model from the late 1970s, highlighting its limitations in showing inter-hierarchical relationships among factors.
- Boehm's quality model is introduced as similar to the previous model but adds structure by defining relationships between the different elements.
Hierarchical Structures in Quality Assessment
- It emphasizes that software components cannot be measured in isolation; improvements in one area may negatively impact another.
- An example illustrates that optimizing for efficiency can compromise testability, showcasing the trade-offs developers face.
Trade-offs and Practical Examples
- A practical example of linked lists demonstrates how simple implementations can become complex due to optimizations, making them harder to maintain.
- Boehm's model acknowledges these trade-offs, suggesting that achieving high testability might require sacrificing some efficiency.
FURPS Model Overview
- The FURPS model (Functionality, Usability, Reliability, Performance, Supportability) is presented as an evolution of earlier models, originating in 1987 with later updates in the 2000s.
- This model includes various criteria for assessing software quality from multiple perspectives.
Importance of User Experience
- The discussion highlights HP's focus on supportability and usability due to their customer interactions, marking a shift towards user experience considerations.
- Emphasizes that user experience must be prioritized; poor interface design can lead to operational failures in products like printers.
Conclusion: Evolving Perspectives on Software Quality
- Concludes with an acknowledgment of the need for diverse perspectives when evaluating software quality models.
Static Code Analysis and Quality Assurance
Overview of Quality Standards in Software Development
- The discussion begins with the mention of a model that outlines quality standards, emphasizing that these are not groundbreaking but rather guidelines that developers have created based on their experiences.
- Key quality criteria such as testability and accountability are introduced, highlighting their importance in software development. These criteria can be functional or non-functional.
Measurement Techniques for Software Quality
- The speaker emphasizes the need to measure various aspects of software quality through indicators derived from high-level criteria, which can be broken down into code-related expectations.
- A focus on static testing is established, indicating that there are specific metrics to monitor within quality assurance models.
Static vs. Dynamic Measurements
- Two primary methods for measuring application performance are identified: static measurement (where the application does not run during analysis) and dynamic measurement (where the code must execute).
- The session will primarily concentrate on static code analysis techniques, exploring what can be examined without executing the code.
Requirements for Static Code Analysis
- To perform static code analysis effectively, access to the application's source code is essential; however, this may vary depending on the analysis technique used.
- The speaker discusses scenarios where access to source code might not be possible, using examples like Facebook's code versus Microsoft Office's locally installed software.
Steps in Conducting Code Analysis
- An outline of steps involved in conducting static code analysis is provided. This includes identifying relevant indicators such as function length and naming conventions for parameters.
- Various tools exist for extracting these metrics from source code or binary files. Each tool serves different purposes based on whether it requires knowledge of the source language or operates solely on executable files.
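One of the simplest extractable indicators is naming. A sketch of a parameter-name check using Python's ast module (the snake_case convention and the sample function are our illustration, not a rule from the lecture):

```python
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def naming_violations(source: str) -> list[str]:
    """Flag function parameters that break a snake_case convention."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for arg in node.args.args:
                if not SNAKE_CASE.match(arg.arg):
                    violations.append(f"{node.name}: parameter '{arg.arg}'")
    return violations

code = "def resize(imageWidth, image_height):\n    return imageWidth\n"
print(naming_violations(code))  # ["resize: parameter 'imageWidth'"]
```

Checks like this need the grammar of the source language; tools that operate on binaries instead can only extract much coarser metrics.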
Integration of Tools for Comprehensive Analysis
- There is an acknowledgment that running multiple separate tools can be cumbersome; thus, integrated solutions have been developed to streamline this process.
Understanding SonarQube and Static Code Analysis
Introduction to SonarQube
- SonarQube is a tool for quality analysis, promising comprehensive insights into code quality through various measurements.
- It collects results from different analyses and presents them in one place, requiring configuration for optimal use.
Steps in the Analysis Pipeline
- The first step involves a lexical analysis, which breaks down the code into tokens for further examination.
- This token-based analysis identifies elements like function names and operators, similar to what compilers do during code compilation.
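Python ships this very machinery: the standard tokenize module performs the lexical step on Python source. A small sketch (the sample line of code is ours):

```python
import io
import tokenize

def lex(source: str):
    """Break source into (token_type, text) pairs -- the lexical step."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return [(tokenize.tok_name[t.type], t.string)
            for t in tokens
            if t.type not in (tokenize.NEWLINE, tokenize.NL,
                              tokenize.ENDMARKER)]

print(lex("total = price * 2\n"))
# [('NAME', 'total'), ('OP', '='), ('NAME', 'price'),
#  ('OP', '*'), ('NUMBER', '2')]
```

A lexical check on naming conventions, for example, only needs this token stream; it does not yet need to understand how the tokens fit together.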
Importance of Syntax Checking
- Static code analysis can identify issues such as missing punctuation that may not be caught until runtime.
- The process examines whether tokens meet predefined expectations regarding syntax and naming conventions.
Lexical vs. Syntactic Analysis
- Lexical analysis checks basic syntactical correctness without needing to understand the full context of the code.
- Syntactic analysis ensures that specific language rules are adhered to, verifying whether constructs like if statements are correctly formed.
Common Pitfalls in Code Structure
- Developers often omit the braces around single-statement blocks in conditional structures, which can lead to logical errors.
- Historical practice allowed omitting braces for single-line conditions; however, this causes confusion in nested conditions, where clarity is crucial.
Consequences of Improper Syntax Usage
- Omitting braces can lead future developers, or the original author revisiting old code, to misread the scope of a conditional.
Syntax Expectations and Control Flow Analysis
Syntax Requirements in Programming
- The speaker emphasizes the importance of syntax rules, stating that single-line instructions without braces are not wanted. This reflects a preference for clearly delimited code blocks.
- A discussion on syntactical expectations reveals that certain conditions must be met, such as requiring default cases in switch statements, which fall under the category of syntactic analysis.
Tools for Syntactic Analysis
- Various tools exist for syntactic analysis, often integrated into development environments or checked later in the process. This integration aids developers in maintaining code quality.
Understanding Control Flow Analysis
- Control flow analysis is introduced as a method to examine the structure of syntactically correct code by building an abstract syntax tree to visualize control structures.
- An example function is presented to illustrate control flow; it shows how instructions without branches appear as nodes in a graph.
Graph Representation of Code Execution
- The speaker explains that instructions without branching are represented as nodes with arrows indicating flow direction. This visual representation helps understand execution paths.
- The absence of branching means all operations will execute sequentially, leading to predictable outcomes within the control flow graph.
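Such a graph can be represented as a plain adjacency list. A toy illustration (the nodes and the function they describe are invented) together with the reachability walk this kind of analysis performs:

```python
# Control-flow graph as an adjacency list: nodes are instructions,
# edges point to their possible successors.
cfg = {
    "read x": ["x < 5 ?"],
    "x < 5 ?": ["y = 1", "y = 2"],  # a branch: two outgoing edges
    "y = 1": ["return y"],
    "y = 2": ["return y"],
    "return y": [],                 # exit node, no successors
}

def reachable(cfg, start):
    """All nodes reachable from `start` by following edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(cfg[node])
    return seen

print(sorted(reachable(cfg, "read x")))
```

Any node the walk never visits is, by definition, unreachable code.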
Handling Errors and Exceptions
- The discussion transitions to error handling using try-catch blocks, noting how these can affect control flow by introducing potential points where exceptions may occur.
- It is clarified that certain parts of code may throw errors (e.g., file operations), impacting how control flows through the program.
Identifying Unreachable Code
- Control flow analysis can identify unreachable code segments within a complete graph by analyzing conditions and determining if any branches cannot be accessed based on logical constraints.
- Examples are provided where conditional statements might lead to unreachable branches due to conflicting conditions, showcasing typical issues identified through control flow analysis.
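One easily detected pattern is a statement placed directly after a return in the same block. A sketch using Python's ast module (illustrative, not the lecture's tool):

```python
import ast

def code_after_return(source: str) -> list[int]:
    """Line numbers of statements that follow a `return` in the same
    block and can therefore never execute."""
    dead = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if isinstance(body, list):
            for prev, stmt in zip(body, body[1:]):
                if isinstance(prev, ast.Return):
                    dead.append(stmt.lineno)
    return dead

src = "def f(x):\n    return x\n    print('never runs')\n"
print(code_after_return(src))  # [3]
```

Real analysers generalize this over the whole control flow graph, also catching branches made unreachable by contradictory conditions.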
Importance of Clean Code Practices
- The necessity for clean coding practices is emphasized; having unreachable return statements or redundant conditions can complicate logic unnecessarily.
Code Analysis and Error Handling
Understanding Code Conditions and Errors
- The discussion begins with the importance of defining correct conditions in code, particularly focusing on if statements to avoid errors.
- An example is provided where a condition checks whether x is less than 5 but fails to address what happens when x equals 5, highlighting a potential oversight in the logic.
- Static code analysis can identify such issues by flagging unhandled cases like x = 5, prompting developers to reconsider their logic.
- Developers have options for handling these flags; they can either add explicit conditions or leave empty code blocks, which raises questions about coding practices and organizational standards.
- The conversation touches on how to manage flagged conditions effectively, suggesting that developers can mark certain warnings as non-critical if they choose not to address them immediately.
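The x-less-than-5 example discussed above fits in a few lines; a minimal reconstruction (the branch labels are invented):

```python
def classify(x):
    """Two branches that silently skip the x == 5 boundary case."""
    if x < 5:
        return "small"
    elif x > 5:
        return "large"
    # x == 5 falls through here with no explicit handling -- a static
    # analyser would flag the implicit `return None`.
    return None

print(classify(4), classify(6), classify(5))  # small large None
```

Making the last branch an explicit else, or handling x == 5 deliberately (even with an empty block plus a comment), is the kind of fix that silences the warning in a defensible way.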
Importance of Code Review Processes
- A second review process is emphasized as crucial for identifying potential errors or confirming that no issues exist after initial analysis.
- The speaker notes that once a condition has been reviewed and deemed acceptable, it should not trigger further alerts in future analyses, streamlining the development process.