The approach adopted for intrusive testing is aligned with industry best practices and standards such as the OWASP testing guides, NIST SP 800-115, and the PASSI requirements (Chapters IV.4.4, IV.5, and IV.6). It follows these steps:

Framework Definition and Kick-off Meeting

Even though the evaluation scope is outlined during the pre-sales phase, it is customary to confirm and refine it during the kick-off meeting with the client and, whenever possible, with their technical teams.

This meeting is also the time to understand the evaluation context and the main security risks the client fears. This makes it possible to focus the tests on specific vulnerabilities or application functionalities, or to highlight a particular scenario in the report.

Additionally, when relevant, an attacker profile will be defined to guide the tests performed and demonstrate the feasibility of a feared scenario.

Finally, it is verified that all prerequisites for testing have been provided, that the evaluation schedule is set, and that the points of contact are known.

By default and unless otherwise indicated by the client, no destructive tests are performed. In this regard:

  1. No denial of service attacks are conducted.

  2. Vulnerabilities that may have a destructive impact are identified but not exploited.

  3. Data deletion functions are tested only on test datasets.

In case of doubt, prior agreement is sought from the client.

Reconnaissance & Information Gathering

The service begins with a passive reconnaissance phase, during which publicly available information about each component within the scope is collected. Public information sources can include domain registrars, DNS records, search engines, public organization information (website, blogs, etc.), published documents, leaked databases, application source code available on public platforms (GitHub, SourceForge, etc.), and more.
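As an illustration of this phase, the minimal sketch below (Python, standard library only) resolves a handful of candidate hostnames through public DNS. The domain and subdomain list are placeholders; real engagements rely on dedicated OSINT tooling rather than this simplified script.

    # Minimal DNS reconnaissance sketch (illustrative only).
    # The target domain and the subdomain wordlist are placeholders; in a real
    # engagement, the scope agreed with the client drives what is queried.
    import socket

    DOMAIN = "example.com"                      # placeholder target, not a real scope
    CANDIDATES = ["www", "mail", "vpn", "intranet", "dev"]

    def resolve(name):
        """Return the IPv4 addresses a name resolves to, or an empty list."""
        try:
            _, _, addresses = socket.gethostbyname_ex(name)
            return addresses
        except socket.gaierror:
            return []

    if __name__ == "__main__":
        for sub in CANDIDATES:
            fqdn = "{}.{}".format(sub, DOMAIN)
            addrs = resolve(fqdn)
            if addrs:
                print("{:<25} -> {}".format(fqdn, ", ".join(addrs)))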

Next, the active reconnaissance phase begins, which relies on network scans and service discovery techniques. For each service identified, all available information is collected: the technology in use, its version, any third-party components, and so on.
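The following sketch gives a very simplified view of what service discovery looks like: it attempts TCP connections to a few common ports and records any banner the service volunteers. The target address and port list are placeholders; in practice this work is carried out with dedicated scanners such as Nmap.

    # Minimal TCP service-discovery sketch (illustrative only): connects to a few
    # common ports and records any banner the service volunteers.
    import socket

    TARGET = "192.0.2.10"                       # placeholder address (TEST-NET-1)
    PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 8080]

    def probe(host, port, timeout=2.0):
        """Return the first banner bytes sent by the service, '' if silent, None if closed."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                try:
                    return sock.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    return ""
        except OSError:
            return None

    if __name__ == "__main__":
        for port in PORTS:
            result = probe(TARGET, port)
            if result is not None:
                print("{}:{:<5} open  {!r}".format(TARGET, port, result))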

In the case of internal penetration tests, the network can also be monitored to identify certain technical information (IP addresses, MAC addresses, broadcast requests, etc.). However, no active Man-in-the-Middle attacks are performed without the explicit permission of the client.

Like destructive tests (e.g., Denial of Service, DoS), Man-in-the-Middle attacks can have uncontrolled impacts on an infrastructure.

For example, such techniques can break SSL/TLS-authenticated communications, thereby disrupting an application or a data processing flow.

Consequently, no active Man-in-the-Middle attacks such as ARP spoofing/poisoning or LLMNR/NBT-NS poisoning are performed without the explicit permission of the client.
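Passive observation of this broadcast traffic, on the other hand, does not involve responding to any request. The sketch below, given as an illustration only, joins the LLMNR multicast group and logs which hosts are emitting name-resolution queries without ever answering them; interface handling is simplified, and elevated privileges may be required depending on the system.

    # Passive observation sketch (illustrative only): joins the LLMNR multicast
    # group and logs which hosts broadcast name-resolution queries, without
    # ever answering them. Requires a host on the audited network segment.
    import socket
    import struct

    LLMNR_GROUP = "224.0.0.252"                 # LLMNR multicast address
    LLMNR_PORT = 5355

    def listen_llmnr():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", LLMNR_PORT))
        # Join the multicast group on the default interface.
        mreq = struct.pack("4s4s", socket.inet_aton(LLMNR_GROUP), socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        print("Listening for LLMNR queries (observation only, no responses sent)...")
        while True:
            data, (src_ip, _) = sock.recvfrom(2048)
            print("LLMNR query observed from {} ({} bytes)".format(src_ip, len(data)))

    if __name__ == "__main__":
        listen_llmnr()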

Evaluation

The evaluation can be conducted using one of three methods: black-box testing (no prior information), gray-box testing (partial information), or white-box testing (all the information needed, including the source code of the evaluated application).

Regardless of the chosen method, all tests start in a black-box context to simulate the behavior of a real attacker on the targeted scope.

The tests are designed to identify security flaws that could affect every layer of the targeted scope, including:

  1. The network layer

  2. The network architecture/flow filtering mechanisms

  3. The protocols used

  4. The operating systems

  5. The SSL/TLS encryption layer (see the sketch after this list)

  6. The middleware components

  7. The application

  8. Any third-party components
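As an example of the kind of check performed on the SSL/TLS encryption layer (item 5 above), the sketch below reports the protocol version and cipher suite negotiated with a server, and whether its certificate chain validates against the system trust store. The host name is a placeholder; real audits use dedicated TLS scanners for exhaustive coverage.

    # Minimal SSL/TLS layer check sketch (illustrative only).
    import socket
    import ssl

    HOST = "example.com"                        # placeholder target
    PORT = 443

    def tls_summary(host, port):
        context = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as raw:
                with context.wrap_socket(raw, server_hostname=host) as tls:
                    print("Protocol : {}".format(tls.version()))
                    print("Cipher   : {}".format(tls.cipher()[0]))
                    print("Certificate chain validated against the system trust store.")
        except ssl.SSLCertVerificationError as exc:
            print("Certificate verification failed: {}".format(exc.reason))

    if __name__ == "__main__":
        tls_summary(HOST, PORT)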

In the case of web application penetration testing, the tests are conducted to identify all web vulnerabilities, including those listed in the OWASP Top 10.

When the evaluation scope includes an authentication mechanism, the client is asked about the presence of an account lockout mechanism. Depending on the response, the tests will include authentication attempts using default accounts, known weak passwords, or even brute force attacks.
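The sketch below illustrates how such authentication tests can be kept under control when a lockout mechanism is, or may be, present. The endpoint, form field names, account list, and candidate passwords are hypothetical; the attempt cap and delay reflect thresholds agreed with the client beforehand.

    # Controlled authentication-testing sketch (illustrative only). All target
    # values below are hypothetical; the attempt cap and delay exist precisely
    # because lockout behaviour is confirmed with the client before testing.
    import time
    import urllib.error
    import urllib.parse
    import urllib.request

    LOGIN_URL = "https://app.example.com/login"     # hypothetical endpoint
    ACCOUNTS = ["admin", "test", "demo"]             # hypothetical default accounts
    CANDIDATES = ["admin", "password", "Welcome1"]   # known weak passwords
    MAX_ATTEMPTS_PER_ACCOUNT = 3                     # kept below the agreed lockout threshold
    DELAY_SECONDS = 5                                # throttle between attempts

    def try_login(username, password):
        """Return True if the response does not look like a rejected login (heuristic)."""
        data = urllib.parse.urlencode({"username": username, "password": password}).encode()
        request = urllib.request.Request(LOGIN_URL, data=data, method="POST")
        try:
            with urllib.request.urlopen(request, timeout=10) as response:
                body = response.read().decode(errors="replace")
                return "invalid" not in body.lower()
        except urllib.error.HTTPError:
            return False

    if __name__ == "__main__":
        for account in ACCOUNTS:
            for password in CANDIDATES[:MAX_ATTEMPTS_PER_ACCOUNT]:
                if try_login(account, password):
                    print("Possible valid credentials: {} / {}".format(account, password))
                    break
                time.sleep(DELAY_SECONDS)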

The approach excludes certain specific tests that could degrade the performance of a service or result in a denial of service, unless requested by the client.

Particular attention is paid to tests that could alter data, such as:

  1. the creation/modification/deletion of accounts
  2. SQL injections in an UPDATE or DELETE query
  3. the creation/modification/deletion of files, etc.

In general, the client’s consent is always sought before exploiting any security vulnerability that could impact a component within the scope.

Most tests are performed manually. However, tools such as Nmap, WFuzz, Patator, Metasploit, and Burp Suite are also used. These tools are known and mastered by our consultants and validated at the company level. When public exploit code is used, it is evaluated beforehand; in case of doubt, the client’s consent is sought before its use.

An evaluation is always time-bound. If the allotted time does not allow vulnerabilities to be identified exhaustively, efforts focus on identifying the most significant ones.

Audit report

At the end of the evaluation, a report is produced. This report can take different forms. By default, a comprehensive evaluation report in PDF format is produced. This includes:

  1. The context of the evaluation (scope, schedule, prerequisites, etc.)
  2. An executive summary
  3. A technical summary
  4. A list of all vulnerabilities with technical details
  5. The remediation plan
  6. A list of all identified hosts and services
  7. The limitations of the evaluation

Vulnerabilities are classified based on their severity. The severity of a vulnerability is assessed according to the following parameters:

  1. The prerequisites for exploiting the vulnerability (authentication, specific information, etc.)
  2. The complexity of the attack
  3. The impact on the scope or the company

The severity of a vulnerability is categorized into four levels: Critical, High, Medium, and Low.

Similarly, recommendations are classified based on:

  1. The complexity of the fix
  2. The improvement in the security level brought by the fix

This helps to define an overall action plan and assists the client in prioritizing the remediation of the identified flaws.
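Purely as an illustration of how these parameters can be combined, the sketch below derives a severity level and a remediation priority from hypothetical scales; it does not reproduce the actual rating grid used in the reports.

    # Illustrative combination of the stated rating parameters; the scales and
    # mapping are hypothetical and do not reproduce the actual rating grid.
    from dataclasses import dataclass

    SEVERITY_LEVELS = ["Low", "Medium", "High", "Critical"]

    @dataclass
    class Finding:
        title: str
        prerequisites: int    # 0 = none (unauthenticated), 2 = strong (privileged account)
        complexity: int       # 0 = trivial, 2 = advanced attack
        impact: int           # 0 = negligible, 3 = full compromise
        fix_complexity: int   # 0 = simple configuration change, 2 = redesign
        fix_benefit: int      # 0 = marginal, 2 = major security improvement

        def severity(self):
            # Impact drives the rating; prerequisites and complexity lower it.
            score = max(0, self.impact * 2 - self.prerequisites - self.complexity)
            return SEVERITY_LEVELS[min(score, len(SEVERITY_LEVELS) - 1)]

        def remediation_priority(self):
            # Cheap fixes with a large security benefit come first (lower = sooner).
            return self.fix_complexity - self.fix_benefit

    if __name__ == "__main__":
        findings = [
            Finding("Unauthenticated SQL injection", 0, 0, 3, 1, 2),
            Finding("Verbose error messages", 0, 0, 0, 0, 1),
        ]
        for f in sorted(findings, key=lambda f: f.remediation_priority()):
            print("[{:>8}] {}".format(f.severity(), f.title))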

Results Presentation / Defense Meeting (Optional)

The evaluation methodology can include a formal presentation of the evaluation results to the management and/or technical teams.

These presentations are supported by a PowerPoint slide deck.

Re-audit (Optional)

Once the fixes have been implemented, a re-audit can be carried out to verify that the identified security flaws have indeed been corrected.

This also ensures that no new vulnerabilities have been introduced by the fix.