Local Area Network: Foundation Concepts, Hardware and Protocols

BOTS and Hacking

Bots are simple DDoS exploitation programs that attack in a distinctive manner. Once installed remotely, these programs receive commands from the hacker and execute them accordingly. Such attacks represent the mainstream of DDoS activity and work through the centralized control of distributed zombies, or bots. The attack is distributed yet automated, since every bot can be ordered to begin by a single person; although it is a DDoS attack, in terms of control it behaves as a centralized one. Through these methods individual hackers can attack the largest online enterprises with a very good chance of success (Jordan & Taylor, 2004, p. 76).

Hacking is a broad term for breaking into a computer unethically or for unauthorized purposes. Hacking also concerns technology’s potential within countercultural and oppositional communities. From the mid-1990s, hacking’s technological expertise became, on the one hand, increasingly co-opted by the commercial mentality of the pre-dot-com-bust Internet ‘industries’ and, on the other hand, equated largely with illicit, illegal or unwanted computer intrusion (what hackers tended to call ‘cracking’). Allowing for disputes over exact times and terminology, this period coincided with the lowest point of hacking’s originally uncontested countercultural status.

Expanding computer companies hired computer technicians in their thousands, effectively both creating and absorbing the type of computer-trained individuals who previously might have been found only in hacking subcultures. Hacking and hackers became integral to multi-million-dollar businesses; the ‘microserfs’ had arrived. Hacking is viewed in negative terms because of its overwhelming association with malicious computer intrusion: the media’s interpretation of the word ‘hacker’ became that of someone who illicitly, even maliciously, took over someone else’s computer.

Historical hacking techniques exploit open file-sharing systems, bad or weak passwords (the default authentication method on most systems) and unwise programming practices. Hackers, along with viruses, can be portrayed as an external threat to security against which computer security professionals and their products are needed as a safeguard. At the same time, however, there also seems to be an implicit recognition that computer systems are inherently susceptible to bugs and intrusions, but that some sort of social solution to such vulnerabilities is more realistic than finding the technical resources necessary to fix the problems (Taylor, 1999, p. 117).

In other words, hackers may provide a useful reminder of the need for improved security, but there is a practical limit to how far this usefulness can be recognised and acted upon outside of a punitive framework.

Bots are more destructive than hacking. Both are of the same nature, in that destroying the network through malicious content is the main intention, but they differ in how they are applied. Bots are more destructive because they are machine-driven and command-oriented, whereas hacking, although also machine-oriented, is driven manually by a human user. Hacking need not be for negative purposes, while bots are command-oriented programs aimed only at destroying the system. A company or agency can hire a hacker to secure its network infrastructure, whereas bots cannot be applied positively.

In this way a company can make good use of hackers, although they work at their own pace and may report only one or two vulnerabilities. From a business perspective, hackers might not understand all the problems a vulnerability can cause; they can exploit the vulnerability and make the company lose money or customers, but they may not clearly communicate to management which business processes are affected by the vulnerability, exactly how those processes can be affected, or what countermeasures are needed to protect the system.

Various Attacker Methods

The attacker methods most commonly deployed are wireless hacking, firewall scanning and DDoS attacks, whereas software hacking methods include exploiting remote-control insecurities and using advanced techniques such as session hijacking, back doors, Trojans and cryptographic attacks. Web hacking includes web server attacks, buffer overflows and web application hacking.

Wireless Footprinting: Wireless networks and access points (APs) are favoured by hackers because they are among the easiest and cheapest modes of attack while being some of the hardest to detect and investigate. Often known as ‘war-driving’, wireless footprinting requires certain types of equipment, in addition to the necessary software, to execute this subset of attacks (McClure et al, 2003, p. 440). Wireless cards, high-power antennas, GPS devices and palm-size laptops or computing devices are used in wireless footprinting.

Firewall Scanning: Since firewalls are a stronger countermeasure than most other defences, a hacker works around the firewall by exploiting the trust of the company or its employees. An attacker simply maps the firewall, understands the weaknesses of the firewalls deployed and attempts to exploit them.
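
To make the idea of mapping a firewall concrete, the following is a minimal sketch in Python of a TCP connect probe that reports which ports on a host answer at all; the target address and port list are hypothetical, and such probing should only ever be run against systems one is authorized to test.

    # Minimal sketch: probe which TCP ports a firewall leaves reachable.
    # The host and port list below are hypothetical examples.
    import socket

    def probe(host, ports, timeout=1.0):
        """Return the subset of ports that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                try:
                    s.connect((host, port))
                    open_ports.append(port)   # connection accepted: port is reachable
                except OSError:
                    pass                      # filtered, closed, or timed out
        return open_ports

    if __name__ == "__main__":
        print(probe("192.0.2.10", [22, 25, 80, 443, 3389]))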

DDoS Attacks: A denial-of-service attack is intended to deprive an organization of valuable resources it is authorized to use. In practice, DoS attacks are those that prevent you from using your computing resources, whether a mail server, Web server or database server. DoS attacks are usually intentional, malicious attacks against a specific system or network, where the attacker might have a personal grudge against the company or might simply want to target a high-profile organization (Andress, 2003, p. 7). DoS conditions can also be caused accidentally by misconfigurations or inappropriate network use, resulting in unavailable resources.

DoS can also result where streaming media and peer-to-peer technology are in common use, overloading network traffic to the point that legitimate business transactions cannot be processed. Many DoS attack methods are discovered through everyday applications, as applications are analyzed for security weaknesses.

Session hijacking, back doors, Trojans and cryptography: Session hijacking, or TCP hijacking, allows a packet to be inserted into a system while enabling commands to be executed on the remote host. Because this type of attack requires a shared medium, attackers use hijacking tools to watch and take over a connection. Back doors are powerful attack techniques through which targeted attackers create mechanisms to quickly regain access to the destination. Trojans help attackers install malicious software that exploits such remote-control back doors. Cryptographic attacks target the confidentiality of data.

Employees are the most dangerous source of attacks on the network because they have full access to, and control over, the company’s information, and even when they do not, employees are well aware of the company’s confidential data. Among the methods that can thwart an attacker’s efforts is installing and configuring the physical infrastructure properly, which can protect the company from a large percentage of attacks.

Such measures include installing and updating firewalls, packet filtering with a screening router, and intrusion detection. Securing the server side and the client side would even leave the company’s own employees exposed if they ever decided to attack the system. Security maintenance and monitoring, supported by security audits, is the more advanced method of protecting the network from hacker attacks.
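
As an illustration of packet filtering on a screening router, the following Python sketch evaluates a first-match rule table; the networks, ports and actions shown are assumptions made for the example, not a recommended policy.

    # Illustrative first-match packet-filter rules for a screening router.
    # Rule fields (source network, destination port, protocol, action) are
    # assumptions for this sketch.
    import ipaddress

    RULES = [
        ("0.0.0.0/0",  80,   "tcp", "permit"),   # public web traffic
        ("0.0.0.0/0",  443,  "tcp", "permit"),   # public HTTPS traffic
        ("10.0.0.0/8", 22,   "tcp", "permit"),   # SSH only from the internal network
        ("0.0.0.0/0",  None, None,  "deny"),     # default deny
    ]

    def filter_packet(src_ip, dst_port, protocol):
        for network, port, proto, action in RULES:
            if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(network):
                continue
            if port is not None and port != dst_port:
                continue
            if proto is not None and proto != protocol:
                continue
            return action
        return "deny"

    print(filter_packet("203.0.113.7", 22, "tcp"))   # deny: SSH from outside
    print(filter_packet("10.1.2.3", 22, "tcp"))      # permit: SSH from inside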

Access Control Plan

An Access Control Plan (AAA) has three components: authentication, authorization and auditing.

Authentication: Authentication checks the validity of a person who is proving his or her identity; it is the process by which an individual proves to be who he or she claims. It involves three elements: the applicant, the credentials and the verifier. The verifier needs information in order to confirm who is accessing a resource. For example, a company’s employee may be restricted to seeing the company’s payroll data or product source code and nothing else.

Authentication is undoubtedly an important access control, but it does not exist in a vacuum; it works together with identification and authorization. To make authentication stronger, several methods can be combined into a single authentication technique, often referred to as multifactor or strong authentication. Two-factor authentication is the most common, in which the applicant uses a PIN code together with a Secure ID token to log on to the required network.

An ATM (Automatic Teller Machine) card works the same way: the applicant inserts the card and proves his or her identity. Common types of authentication include password and digital-certificate authentication. Password authentication is easy to develop and embed in applications, and with proper password selection and implementation it can provide adequate protection for resources. Advanced authentication methods include biometrics, such as fingerprint scanning, iris scanning and face recognition.
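
As a sketch of how password authentication can be made reasonably safe, the following Python example stores a salted, slow hash of the password rather than the password itself and compares candidates in constant time; the iteration count and hash choice are illustrative assumptions.

    # Sketch of password authentication with salted hashing (PBKDF2).
    import hashlib
    import hmac
    import os

    def enroll(password: str):
        """Create the salt and digest the verifier stores instead of the password."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, digest = enroll("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))   # True
    print(verify("wrong guess", salt, digest))                    # False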

Authorization: Authorizing the applicant does not mean that the verifier grants the applicant access to do anything and everything in the file-sharing system. Limits are set so that the applicant can only perform the intended tasks on the file. When an applicant or user requests a resource, the application forwards the request to the authorization server.

It is then the job of the authorization server to determine which resources the applicant may access, forwarding authentication information to a Lightweight Directory Access Protocol (LDAP) server, a Remote Authentication Dial-In User Service (RADIUS) server, or a Windows domain server (Andress, 2003, p. 285). The authorization server then sends the result, either allowing or denying the requested access, back to the application. This is a simplified description of the process.
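
The authorization step can be reduced, for illustration, to a lookup of which operations a user may perform on which resources; the user names, resources and permissions below are hypothetical, and a real authorization server would consult LDAP, RADIUS or a domain controller as described above.

    # Toy authorization check: map users to the resources and operations they may use.
    PERMISSIONS = {
        "alice": {"payroll.db": {"read"}, "reports/": {"read", "write"}},
        "bob":   {"reports/": {"read"}},
    }

    def authorize(user: str, resource: str, operation: str) -> bool:
        allowed = PERMISSIONS.get(user, {}).get(resource, set())
        return operation in allowed

    print(authorize("alice", "payroll.db", "read"))   # True: access granted
    print(authorize("bob", "payroll.db", "read"))     # False: access denied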

Auditing: Auditing is the collection and maintenance of log files, giving the verifier the opportunity to review them and thereby record the various tasks people have performed on a file or on the system. When auditing is enabled to monitor specific database activities, a great deal of data accumulates quickly. In Oracle, for example, the SYS.AUD$ table can grow very large very fast, even to the point where it invades the SYSTEM tablespace’s free space, creating database problems and leading many database administrators to disable auditing entirely (Andress, 2003, p. 285).

Auditing on a per-object basis is an option, and some data might be important enough to warrant a record of any time someone even tries to access it. Each audited event includes details such as the type or name of the event, the date and time of occurrence, whether or not it was successful, and any program names or filenames involved.
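
A minimal sketch of such an audit record, written in Python, might append one line per event with the event name, timestamp, outcome and object involved; the log path and field names are assumptions for the example.

    # Sketch of an append-only audit log with the fields described above.
    import json
    from datetime import datetime, timezone

    def audit(event: str, success: bool, target: str, log_path: str = "audit.log"):
        record = {
            "event": event,                                  # type or name of the event
            "time": datetime.now(timezone.utc).isoformat(),  # date and time of occurrence
            "success": success,                              # whether it succeeded
            "target": target,                                # program or file involved
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")

    audit("file_open", True, "/srv/payroll/salaries.csv")
    audit("login", False, "sshd")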

Working of a Firewall: A firewall works by checking credentials in sequence, examining and logging packets at checkpoints as it delivers or discards them. Whenever a packet arrives, the firewall examines it and classifies it either as a ‘provable attack packet’ (PAP) or an ‘unprovable attack packet’ (UAP). If the packet is identified as a PAP, the firewall discards it; if it is identified as a UAP, the firewall allows it to pass. In both cases the firewall maintains a log file in which a copy of the packet, with its packet information, is stored.
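
The PAP/UAP decision can be illustrated with a short Python sketch in which a packet matching a known attack pattern is logged and dropped while everything else is logged and passed; the patterns used are placeholders, not real firewall signatures.

    # Toy illustration of the provable/unprovable attack packet decision.
    PROVABLE_ATTACK_PATTERNS = [b"' OR 1=1 --", b"\x90" * 32]   # placeholder signatures

    def inspect(packet: bytes, log: list) -> str:
        if any(pattern in packet for pattern in PROVABLE_ATTACK_PATTERNS):
            log.append(("PAP", packet))   # provable attack packet: record and discard
            return "drop"
        log.append(("UAP", packet))       # unprovable attack packet: record and forward
        return "pass"

    log = []
    print(inspect(b"GET /index.html HTTP/1.1", log))   # pass
    print(inspect(b"id=1' OR 1=1 --", log))            # drop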

Working of Intrusion Detection System (IDS)

An intrusion-detection system (IDS) continuously monitors networks in order to recognize attacks, and monitors computer systems for signs of intrusion or misuse. An IDS supplements firewalls by identifying suspicious packets that may indicate attacks and keeping an eye on them by logging them. Although an IDS does not drop these packets, it logs them and closely monitors their activity. Categories of IDS include application-based, host-based, network-based and integrated systems. Some intrusion-detection products combine attributes of these categories, most commonly host- and network-based ID sensors.

Application-intrusion detection, for example, monitors information at the application level, such as the logs created by database servers, Web servers, application servers and firewalls, with sensors placed in the application to collect and analyze the information. Most IDSs provide centralized management, so the information collected is sent to a centralized repository for review and analysis (Andress, 2003, p. 201).

The working of an IDS falls into three main classes: signature, statistical and integrity analysis. Signature analysis looks for specific attacks against known weak points of a system, which can be detected by analyzing traffic for known strings or for actions being sent to or performed on certain files or systems. Many commercial IDS products work by examining network traffic for well-known patterns of attack. For every recognized attack technique, the product developers code something, usually referred to as a signature, into the system. This can be a basic pattern match, such as /cgi-bin/password, indicating that someone is attempting to access the password files on a Web server.

The signature can also be as complex as a security state transition written as a formal mathematical expression. To use signatures, the IDS performs signature analysis on the information it obtains, matching patterns in system settings and tracked user activity against a database of known attacks.
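
A minimal sketch of signature analysis in Python is shown below: observed payloads are matched against a small dictionary of known attack patterns, including the /cgi-bin/password example above; the other signatures are illustrative.

    # Sketch of signature-based detection over observed payloads.
    import re

    SIGNATURES = {
        "password file probe": re.compile(r"/cgi-bin/password"),
        "directory traversal": re.compile(r"\.\./\.\./"),
        "sql injection":       re.compile(r"('|%27)\s*or\s*1=1", re.IGNORECASE),
    }

    def match_signatures(payload: str):
        """Return the names of all signatures that match the payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

    print(match_signatures("GET /cgi-bin/password HTTP/1.0"))   # ['password file probe']
    print(match_signatures("GET /index.html HTTP/1.0"))         # []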

Statistical-intrusion analysis involves observing deviations from a baseline of normal system usage patterns: the IDS detects intrusions by creating a profile of the system and watching for significant deviations from this profile. In the classic model for anomaly detection, the metrics are derived from monitoring system functionality.

The IDS builds this baseline by relying on operating system audit trails. The baseline forms a footprint of system usage that is easily recorded from audit trails, which are a convenient data source because they are available on most systems.

Against this baseline, the IDS periodically calculates metrics on the system’s current state and determines whether an intrusion, or anomalous behavior, is occurring. Such intrusions are difficult to detect because these approaches have no fixed patterns to identify. An IDS does not have to rely on operating system audit trails; it can perform its own system monitoring and create system profiles from the aggregate statistics it gathers. These statistics can be accumulated from many sources, such as CPU, disks, memory or user activity.
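
As a sketch of the statistical approach, the following Python example builds a baseline (mean and standard deviation) from past samples of a metric and flags readings that deviate from it by more than a chosen threshold; the metric, sample values and threshold are illustrative assumptions.

    # Sketch of anomaly detection against a baseline of normal usage.
    from statistics import mean, stdev

    def build_baseline(samples):
        return mean(samples), stdev(samples)

    def is_anomalous(value, baseline, threshold=3.0):
        mu, sigma = baseline
        return abs(value - mu) > threshold * sigma

    cpu_history = [12, 15, 11, 14, 13, 16, 12, 15]   # e.g. percent CPU from audit data
    baseline = build_baseline(cpu_history)

    print(is_anomalous(14, baseline))   # False: within normal usage
    print(is_anomalous(95, baseline))   # True: significant deviation from the profile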

Integrity analysis identifies whether a file or object has been altered, using strong cryptographic hash algorithms, such as SHA-1, to determine whether anything has been modified (Andress, 2003, p. 198). There are many ways to implement these analysis methods; however, the best approach is to combine them in order to provide built-in redundancy.
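
A brief Python sketch of integrity analysis with SHA-1, as named above, records a hash for each monitored file when the system is trusted and later recomputes and compares it; the file path is an illustrative example.

    # Sketch of file-integrity checking with SHA-1 hashes.
    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Baseline taken while the system is in a known-good state.
    baseline = {"/etc/passwd": sha1_of("/etc/passwd")}

    def altered(path: str) -> bool:
        return sha1_of(path) != baseline[path]   # True if the file has been modified

    print(altered("/etc/passwd"))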

A CSIRT (Computer Security Incident Response Team) is trained to handle major attacks and disaster recovery, and rehearsals are necessary to gauge how the IT disaster-recovery professionals would handle a real situation. A CSIRT is tied to disaster recovery because of the expensive operations the company has to perform; without such an in-house team, it would be very expensive for the company to hire outsiders as an IT recovery team.

Traffic Management Methods

Traffic management methods, along with device security, are quickly becoming one of the most important aspects of a security infrastructure. The growing reliance on networks and networked devices, regardless of which methods are used to manage traffic, is increasing the number of nodes that must be secured and managed, which is usually not done effectively.

Quality of Service: The threat of denial-of-service attacks is a major concern in organizations. QoS reservations are the space and capacity that a switch and a transmission line hold for an application; QoS allows guarantees of minimum throughput and maximum latency (Raymond, 2007, p. 495).

QoS can be managed through effective bandwidth management. Active-active is another type of failover setup and is gaining popularity as organizations seek inexpensive ways to increase bandwidth and throughput. ‘Small Office or Home Office’ (SOHO) routers include packet-filtering firewalls that small businesses and residential users can use for basic perimeter security; however, depending on bandwidth use and security concerns, most small businesses eventually complement the packet filters in the access router with a firewall appliance.
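
One common way to enforce a bandwidth budget of this kind is a token bucket; the following Python sketch illustrates that general technique rather than anything taken from the cited texts, and the rate and burst parameters are assumptions.

    # Illustrative token-bucket limiter for bandwidth management.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes   # packet fits within the budget: send it
                return True
            return False                      # over budget: queue or drop the packet

    bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)   # ~1 Mbit/s
    print(bucket.allow(1500))   # True while tokens remain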

Priority: Applications can be prioritized on Ethernet, and administrators also use switches to provide a measure of increased security while managing network traffic. Network sniffers function by placing a computer’s network card in ‘promiscuous’ mode; in this mode the card captures all packets sent across its interface, even those not addressed to it. In a hub environment, a system running a sniffer can therefore capture all traffic that passes through it, which can be managed by prioritizing the network devices. On a switched network, a sniffer can see only the traffic intended for that specific system.
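
Prioritization itself can be sketched as strict-priority scheduling, where the highest-priority queued frame is always transmitted first; the traffic classes below are hypothetical, and the sketch is only a simplified illustration of what a switch does in hardware.

    # Sketch of strict-priority scheduling of queued frames.
    import heapq

    class PriorityScheduler:
        def __init__(self):
            self._queue = []
            self._seq = 0          # preserves FIFO order within the same priority

        def enqueue(self, priority: int, frame: str):
            heapq.heappush(self._queue, (priority, self._seq, frame))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._queue)[2] if self._queue else None

    sched = PriorityScheduler()
    sched.enqueue(2, "bulk file transfer")
    sched.enqueue(0, "voip frame")          # lower number = higher priority
    sched.enqueue(1, "web request")
    print(sched.dequeue())                  # 'voip frame' is transmitted first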

Overprovisioning: This method installs much more capacity than is needed and is wasteful because of the unnecessarily reserved capacity. On the other hand, little labour is required in this traditional approach.

Using the Internet for corporate communications causes some concern because traffic on the Internet is not encrypted and is freely available to any sniffer sitting on the network segment. Routers are often the first device an attacker encounters, and a router or switch that resides between the company’s firewall and its Internet access provider, or between the firewall and the internal network, forms a key security point that should be adequately protected. Compromise of these devices can provide attackers with valuable information about the company’s network infrastructure or give them the opportunity to configure so-called man-in-the-middle attacks, such as rerouting traffic destined for the company’s Web servers to an alternative system (Andress, 2003, p. 163).

These methods can be complemented by traffic-shaping techniques, namely filtering and percentage-of-capacity methods. Filtering drops unwanted traffic at access switches, while percentage of capacity assigns certain applications a share of capacity at access switches (Raymond, 2007, p. 496). While managing network traffic, a company must be aware of failures in performing threat analysis, since such failures create risk; an application-threat analysis should be performed in the design phase to help identify threats to the application. To implement security controls properly, you need to know what you are securing against.
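
The two shaping techniques can be sketched together in Python: one set of traffic classes is filtered outright, while the remainder are assigned fixed shares of the link; the class names, shares and link capacity are hypothetical.

    # Sketch of filtering plus percentage-of-capacity assignment at an access switch.
    LINK_CAPACITY_MBPS = 100
    BLOCKED_CLASSES = {"peer-to-peer"}                            # filtered traffic
    CAPACITY_SHARE = {"voip": 0.20, "erp": 0.50, "web": 0.30}     # percentage of capacity

    def allocation(app_class: str) -> float:
        """Bandwidth in Mbps this class may use; 0 if the class is filtered."""
        if app_class in BLOCKED_CLASSES:
            return 0.0
        return LINK_CAPACITY_MBPS * CAPACITY_SHARE.get(app_class, 0.0)

    print(allocation("voip"))           # 20.0
    print(allocation("peer-to-peer"))   # 0.0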

References

Andress, A. (2003). Surviving Security: How to Integrate People, Process, and Technology. Boca Raton, FL: Auerbach Publications.

Jordan, T., & Taylor, P. A. (2004). Hacktivism and Cyberwars: Rebels with a Cause? New York: Routledge.

McClure, S., Scambray, J., & Kurtz, G. (2003). Hacking Exposed: Network Security Secrets and Solutions (4th ed.).

Panko, R. R. (2007). Business Data Networks and Telecommunications. Prentice Hall.

Taylor, P. A. (1999). Hackers: Crime in the Digital Sublime. London: Routledge.