Security Terminology Learning-Intermediate


Data anonymization

Data anonymization is a technique for protecting private or sensitive information by erasing or encrypting the identifiers that link the data to an individual.

PII, such as names, social security numbers, and addresses, can be protected using a data anonymization technique that preserves the usefulness of the data while concealing its source.

Data anonymization reduces the risk of accidental PII leakage, and if a data breach occurs, the stolen information will be of no use to attackers.

There are a number of data anonymization techniques:

  1. Masking – concealing information by replacing it with altered values. A mirror replica of the original database can be created and then modified using methods like character shuffling, encryption, and word or character substitution; for instance, a value’s characters may be swapped out for a symbol such as “*” or “x”. Properly masked data is designed to be impractical to reverse engineer or detect (a minimal sketch follows this list).
  2. Data pseudonymization – a de-identification and data management technique that replaces private identifiers with fake identifiers or pseudonyms, such as changing “John Doe” to “Martin Crowe.” Pseudonymization preserves statistical accuracy and data integrity, so the updated data can be used for training, development, testing, and analytics while maintaining data privacy.
  3. Data generalization – deliberately removes detail from the data to make it less identifiable. Data can be transformed into a range of values or a broad region with suitable boundaries. The goal is to remove some identifiers while maintaining a degree of data accuracy.
  4. Adding synthetic data – information generated by an algorithm with no relation to actual events. Instead of modifying the original dataset, or using it as-is and putting privacy and security at risk, synthetic data is used to create artificial datasets. The procedure builds statistical models from the original dataset’s patterns; standard deviations, medians, linear regression, or other statistical methods can be used to generate the synthetic data.
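
As a rough illustration of the first three techniques, here is a minimal Python sketch; the field names, pseudonym table, and substitution rules are invented for the example:

    import random

    record = {"name": "John Doe", "ssn": "123-45-6789", "age": 34}

    def mask(value):
        # masking: replace every alphanumeric character with "*"
        return "".join("*" if ch.isalnum() else ch for ch in value)

    PSEUDONYMS = ["Martin Crowe", "Jane Roe"]  # invented lookup table

    def anonymize(rec):
        return {
            "name": random.choice(PSEUDONYMS),        # pseudonymization
            "ssn": mask(rec["ssn"]),                   # masking -> ***-**-****
            "age_range": f"{rec['age'] // 10 * 10}s",  # generalization -> "30s"
        }

    print(anonymize(record))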
Non-Repudiation

Non-repudiation ensures, through encryption and/or digital signatures, that a party cannot deny having transmitted or received a communication, or having accepted certain material. Nor can a party deny the authenticity of its signature on a document.

Non-repudiation is one of the foundations of information assurance (IA), the practice of managing information-related risks and protecting information systems and enterprise networks.

Cryptography is used to achieve non-repudiation.

Digital signatures in online transactions provide assurance that a party cannot subsequently dispute having sent information or the validity of its signature. In public key cryptography, a digital signature is generated using the private key of an asymmetric key pair and validated using the matching public key.

Only the owner has access to the private key, so nobody else can use it to sign documents electronically on the owner’s behalf. This provides non-repudiation: the signer cannot subsequently dispute having produced the signature.
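
As a minimal sketch of sign-and-verify using the Python cryptography package (RSA-PSS with SHA-256 is one common choice among several; the message is invented):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"I authorize payment of $500."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # only the private-key holder can produce this signature
    signature = private_key.sign(message, pss, hashes.SHA256())

    # anyone holding the public key can check it; verify() raises
    # InvalidSignature if the message or signature was altered
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature verified")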

Transport Layer Security (TLS)

Transport Layer Security, or TLS, is a widely used cryptographic protocol designed to provide privacy and data security for communications over the Internet. A primary use case of TLS is encrypting the communication between web browsers and web servers. TLS can also be used to encrypt other forms of communication such as email, IM, and voice over IP (VoIP).

TLS evolved from Secure Sockets Layer (SSL), which was originally developed by Netscape Communications Corporation in 1994 to secure web sessions. SSL 1.0 was never publicly released, whilst SSL 2.0 was quickly replaced by SSL 3.0, on which TLS is based.

How does TLS function?

When sending data securely, TLS utilizes a combination of symmetric and asymmetric cryptography, as this offers a good balance between performance and security.

Symmetric cryptography encrypts and decrypts data using a secret key shared between the sender and the receiver. This key is normally at least 128 bits long, preferably 256 bits. Symmetric cryptography is computationally efficient, but because it relies on a shared secret key, that key must be communicated in a safe way.
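
For illustration, here is a minimal symmetric encrypt/decrypt round trip using Fernet from the Python cryptography package (Fernet wraps 128-bit AES with an authentication code; the plaintext is invented):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the shared secret: both sides need this key
    f = Fernet(key)

    token = f.encrypt(b"order #1042: ship on Friday")
    print(f.decrypt(token))       # b'order #1042: ship on Friday'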

The TLS handshake is the sequence used to start a TLS connection. The TLS handshake between the user’s device (sometimes referred to as the client device) and the web server starts when a user navigates to a website that employs TLS.

During the TLS handshake, the user’s device and the web server:

• Indicate the TLS version (TLS 1.0, 1.2, 1.3, etc.) they’ll be using.
• Select the cipher suites they’ll employ.
• Utilize the server’s TLS certificate to verify the server’s identity.
• After the handshake is over, create session keys for encrypting messages between them.

A cipher suite is agreed for each communication session during the TLS handshake. The cipher suite is a collection of algorithms whose parameters determine, among other things, the shared encryption keys, or session keys, used for that specific session. TLS uses public key cryptography to establish the matching session keys across an unencrypted channel.
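
To see the outcome of a handshake, here is a minimal sketch using Python’s standard ssl module (example.com stands in for any TLS-enabled host):

    import socket
    import ssl

    hostname = "example.com"                # placeholder host
    context = ssl.create_default_context()  # validates the server certificate

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())            # negotiated version, e.g. 'TLSv1.3'
            print(tls.cipher())             # negotiated cipher suite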

Penetration Testing

A penetration test, also called a pen test or ethical hacking, is an authorized simulated attack performed on an information system to assess its security posture.

These penetration tests are often carried out by ethical hackers or pen testers.

Penetration testers use similar tools, attack techniques, and processes as attackers to find and simulate the business impacts of weaknesses in a system.

Penetration tests usually simulate a variety of attacks that could threaten a business. They can examine whether a system is robust enough to withstand attacks from authenticated and unauthenticated positions, as well as a range of system roles. With the right scope, a pen test can dive into any aspect of a system.

Different types of pen testing strategies

  1. White box testing – gives testers all the information about a company’s system or target network and evaluates the product’s code and internal structure. Clear box, glass box, transparent box, and code-based testing are further names for white box testing.
  2. Black box testing – a kind of functional and behavioral testing in which testers are not provided any background information on the system. In black box testing, organizations frequently employ ethical hackers to conduct a real-world attack and determine the system’s weaknesses.
  3. Gray box testing – is a hybrid of black box and white box testing methods. It gives testers access to low-level credentials, logical flow diagrams, and network maps, among other system information. The basic goal of gray box testing is to identify potential functional and coding problems.

Phases of a pen test

Pen testers often follow a structured approach consisting of the following actions:

  1. Reconnaissance – In this phase, the pen tester gathers as much information as possible about the target from public and private sources to plan the attack strategy. Sources include internet searches, domain registration information retrieval, social engineering, nonintrusive network scanning, and sometimes even dumpster diving. Reconnaissance may vary based on the scope and objectives of the pen test; it can be as simple as making a phone call to walk through the functionality of a system.
  2. Scanning – After reconnaissance, they use tools to examine the target website or system for weaknesses, including open services, application security issues, and open source vulnerabilities. Pen testers use a variety of tools based on what they find during reconnaissance and during the test.
  3. Gaining access – Attacker goals may include data theft, alteration, or deletion; the transfer of funds; or simply reputational harm to a business. Pen testers use the appropriate tools and methods for each test scenario to enter the system, whether through a vulnerability like SQL injection or via malware, social engineering, or another method.
  4. Maintaining access – Once pen testers have gained access to the target, they must maintain connectivity long enough for the simulated attack to succeed in exfiltrating data, changing it, or exploiting functionality. This is important for demonstrating the possible impact.
Penetration testing can be applied across many areas in the information systems space, such as:

  1. Web Application penetration testing – Web application penetration testing is carried out to find weaknesses in online applications, websites, and web services. Pen testers evaluate an application’s design, security protocol flaws, and code security.
  2. Network penetration testing – In this instance, the penetration tester examines a network environment for security flaws. Network penetration testing falls into two categories: external and internal.
  3. Mobile Device penetration testing – Mobile devices are a tempting target for bad actors because of the sheer number of mobile applications now available on the market. Penetration testing of mobile devices is crucial to the overall security posture; it helps evaluate the security of a mobile device and its apps, locate vulnerabilities, and identify coding errors.
  4. Social Engineering penetration testing – In a social engineering test, the goal is to persuade staff members to divulge confidential data or grant access to the company’s systems. Penetration testers can use this information to determine how susceptible the company is to fraud and other social engineering attacks.
SAST

A widely used Application Security (AppSec) tool called Static Application Security Testing (SAST) searches the source code, binary, or byte code of an application. It is a white-box testing tool that helps address underlying security problems by determining the source of vulnerabilities.

By giving quick feedback to developers on problems brought into code during development, SAST lowers security risks in programs. With real-time access to suggestions and line-of-code navigation, it assists developers in learning about security as they work, facilitating quicker vulnerability detection and collaborative auditing. This makes it possible for developers to write more secure code, which results in a more secure program and less of a need for frequent upgrades and software modernization.
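
As an illustration of the kind of finding a SAST tool reports, here is a minimal Python sketch contrasting a string-built SQL query (typically flagged as CWE-89, SQL injection) with its parameterized fix; the table and function names are invented:

    import sqlite3

    def find_user_unsafe(conn, username):
        # flagged by SAST: untrusted input concatenated into SQL (CWE-89)
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{username}'").fetchone()

    def find_user_safe(conn, username):
        # parameterized query: the driver handles escaping
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)).fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    print(find_user_safe(conn, "alice"))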

Advantages

  1. Early detection of vulnerabilities
  2. Detection of common vulnerability classes, such as those catalogued in the OWASP Top 10 and CWE

Disadvantages

  1. Not capable of finding vulnerabilities during runtime
  2. High false positives
  3. Time-consuming
  4. Reports get outdated quickly as the code changes frequently

A few popular SAST tools

  1. Veracode
  2. HCL AppScan
  3. Checkmarx CxSAST
  4. SonarQube
  5. Fortify Static Code Analyzer
DAST

Dynamic application security testing (DAST) is a kind of AppSec testing in which testers examine a running application without access to or insight into the source code and without knowing how the application interacts with the system internally.

This “black box” testing analyzes an application’s operating state from the outside in and keeps track of how it responds to simulated assaults launched by a testing tool. Applications’ reactions to these simulations may be used to ascertain if they are secure and resistant to actual malicious attacks.

The DAST tool determines if an application has a specific vulnerability based on how it reacts to different inputs. An exploitable vulnerability is present, for instance, when a SQL injection attack grants unauthorized access to data or when an application fails as a result of erroneous or corrupt input.
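
As a toy illustration of this probe-and-observe idea, here is a sketch using Python’s requests library against a hypothetical staging endpoint (the URL and parameter name are invented, and such probes must only target systems you are authorized to test):

    import requests

    URL = "https://staging.example.com/search"   # hypothetical endpoint

    # an error page, stack trace, or markedly different response for the
    # quote-based payloads can indicate a SQL injection vulnerability
    payloads = ["test", "test'", "test' OR '1'='1"]

    for p in payloads:
        r = requests.get(URL, params={"q": p}, timeout=5)
        print(p, r.status_code, len(r.text))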

Advantages

  1. Finds runtime issues
  2. Low false positives
  3. Programming language independent results

Disadvantages

  1. Identifies vulnerabilities after development
  2. Might not detect vulnerabilities such as backdoors, logic bombs, and vulnerabilities specific to business use cases.

A few popular DAST tools

  1. Rapid7
  2. AppScan
  3. Acunetix
  4. PortSwigger
  5. Qualys
  6. Checkmarx
IAST

Interactive application security testing (IAST) examines code for security flaws while the application is running. During a test, IAST tools place agents and sensors inside the application to find problems as they arise; the application can be exercised manually or automatically to detect vulnerabilities.

The IAST tool highlights the sections of code that contain vulnerabilities to help the user locate coding errors. By highlighting the vulnerable code, it shows the developer exactly what has to be changed to close the vulnerability.

Advantages

  1. Low false-positive rate
  2. Quick testing
  3. Simple to deploy
  4. Pinpoints the source of vulnerability
  5. On-demand feedback
WAF

A web application firewall (WAF) is a special type of firewall that monitors, filters, and blocks Hypertext Transfer Protocol (HTTP) traffic as it travels to and from a website or web application.

A WAF can be deployed network based, host based, or cloud based. It is often deployed through a reverse proxy placed in front of one or more websites or applications. Running as a network appliance, server plugin, or cloud service, the WAF inspects each packet and uses a rule base to analyze Layer 7 web application logic and filter out potentially harmful traffic that can facilitate web exploits.

Enterprises frequently utilize web application firewalls as a security measure to guard against malware infections, impersonation attacks, zero-day exploits, and other known and unknown threats and vulnerabilities.

Typical WAF approaches for defense:

  1. Blacklisting – To stop harmful web traffic and guard against vulnerabilities in websites or web applications, blacklisting employs pre-set signatures: a set of rules for identifying malicious packets (a toy filter sketch follows this list). Blacklisting is more suitable for public websites and web apps because they receive a lot of traffic from unfamiliar IP addresses that aren’t known to be either malicious or benign. Its disadvantage is that, in contrast to trusting known IP addresses by default, it consumes more resources and requires more data to filter packets against the specified criteria.
  2. Whitelisting – By default, the WAF rejects all requests and only accepts those that are known to be trustworthy. A list of IP addresses believed to be secure is provided. Blacklisting uses more resources than whitelisting does. Whitelisting has the drawback of potentially blocking good traffic without your knowledge. It can be effective and cast a large net, but it can also be inaccurate.
  3. Hybrid – A hybrid security model combines aspects of blacklisting and whitelisting.
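
Here is a toy sketch of the blacklist and whitelist approaches in Python; the signatures and IP addresses are invented, and real WAFs use far richer rule engines:

    import re

    BLACKLIST = [re.compile(r"(?i)<script"),          # crude XSS signature
                 re.compile(r"(?i)union\s+select")]   # crude SQLi signature
    WHITELIST_IPS = {"203.0.113.10", "203.0.113.11"}  # known-good clients

    def allow_request(src_ip: str, body: str, mode: str = "blacklist") -> bool:
        if mode == "whitelist":
            # default deny: only pre-approved sources get through
            return src_ip in WHITELIST_IPS
        # default allow: drop only requests matching a known-bad signature
        return not any(sig.search(body) for sig in BLACKLIST)

    print(allow_request("198.51.100.7", "q=union select pw from users"))  # False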

Benefits of a typical WAF

  1. Defense against common web attacks
  2. Logging and Monitoring application access traffic
  3. Application profiling to differentiate between legitimate and illegitimate requests
  4. AI-enabled traffic pattern detection and prevention (in some products)
  5. Defense of applications without changes to their source code
Non-disclosure agreement

A non-disclosure agreement (NDA), also called a confidentiality agreement, is a legally binding contract in which one party agrees to divulge confidential information about its products or business to another party, and the receiving party promises not to share that information with anyone else for a predetermined period of time. By clearly defining what information must be kept secret and what information can be disclosed or made public, NDAs are used to safeguard sensitive data and intellectual property (IP).

Normally, NDAs are agreed to at the start of a corporate partnership. An NDA may cover any information, including test findings, system specifications, customer lists, and sales data. Information leaks that violate the NDA are regarded as breaches of contract.

Important components of an NDA include:

  1. The names of the participants
  2. A definition of what is deemed confidential
  3. The duration of the confidentiality commitment
  4. Exclusions from the confidentiality clause
Audit Trails

Audit trails keep a record of activity on systems and applications, covering both user activity and the activities of the systems and applications themselves. Used in conjunction with the right tools and procedures, audit trails can help find application faults, performance issues, and security violations.

A typical audit trail record contains the following (a sketch follows the list):

  1. Who: the user or application program, plus a transaction number
  2. When: the date and time
  3. Where: the location of the user or terminal
  4. What: the data being worked on or modified
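
As a sketch, a record carrying these four fields might be emitted as a structured log line like this (the field names and values are invented):

    import json
    from datetime import datetime, timezone

    def audit_record(user, action, resource, terminal):
        # who / when / where / what, matching the fields listed above
        return json.dumps({
            "who": user,
            "when": datetime.now(timezone.utc).isoformat(),
            "where": terminal,
            "what": f"{action} {resource}",
        })

    print(audit_record("jdoe", "UPDATE", "invoice:1042", "terminal-7"))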

Benefits of maintaining audit trails

  1. Easy verification: Governments commonly require businesses, especially large ones, to undergo an audit by an independent third party at least once a year. If an audit trail is already maintained, the external auditor’s job is reduced to verifying that the transactions recorded in the trail are valid. This cuts the time and money the organization spends on external audits while making the auditor’s job less tiresome.
  2. Fraud prevention: Maintaining an audit trail makes fraud easier to prevent. If any irregularities occur within the system, they can be traced and recovered from, and employees are deterred from committing fraud because they know the audit trail will expose it. External fraud can be averted if security is made tight and hard to break into.
  3. Easy recovery: In case of a disaster, all the necessary information can be backed up and recovered with the help of audit trails.

Online access to audit logs should be tightly restricted. Security and/or administration staff who maintain logical access functions may have no need to access the audit logs themselves; computer security managers and system administrators or managers should have access for review purposes.

