Hardware and Software Security
The first line of defense in cybersecurity is always the user (you). Hardware and software come next. Although the two work independently, they are indispensable to each other.
Hardware security has two parts: individual system security and wired or wireless network security. Individual system security depends on the type of processor, the programs used, and the core safety of the operating system that runs on the processor. Known vulnerabilities in the processor, the memory chips, and even the additional drives or disks attached to a system are all security concerns. Systems we use at home are connected to our ISP’s network either via Wi-Fi or a wired connection. Wi-Fi requires a router and modem, which need to be secured with good passwords to limit or prevent hacking attempts. Weak, easily guessed, or default passwords are a cause for concern. The protocols we use when setting up the Wi-Fi router need to be looked at closely; older ones such as WEP and TKIP are known to be insecure, and WPA2 or WPA3 should be preferred. Likewise, the lack of a firewall on these hardware devices can make it easy for a hacker to break into our home networks.
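To make the default-password concern concrete, here is a minimal Python sketch that flags a router password if it matches a common factory default or fails basic strength rules. The password list and the rules are illustrative assumptions, not a vetted policy.

# Hypothetical sketch: flag default or weak router passwords.
# The default list and strength rules are illustrative, not exhaustive.

COMMON_DEFAULTS = {"admin", "password", "1234", "12345678", "default"}

def is_weak_password(password: str) -> bool:
    """Return True if the password is a known default or too simple."""
    if password.lower() in COMMON_DEFAULTS:
        return True
    if len(password) < 12:                      # too short to resist guessing
        return True
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(classes) < 3                     # require 3 of 4 character classes

if __name__ == "__main__":
    for pw in ("admin", "Summer2024", "x9$Lk#28!pQz"):
        print(pw, "-> weak" if is_weak_password(pw) else "-> acceptable")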
Networked systems have their own problems with open ports, default passwords, switches, hubs, routers, load balancers, and so on. If the organization maintains its own web servers, how secure are they? Are unnecessary ports left open, or do they allow anonymous logins? If the organization expects heavy traffic on its web server, can a single server handle the load, or is a load balancer needed to distribute the traffic? Usually a trusted computing base (TCB) is established with a standard setup for all users. This baseline setup decides which programs are given to users by default (for example, Word, Excel, Outlook) and which are not. The TCB usually consists of fully vetted end-user machines, application servers, web servers, database servers and clients, and so on.
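As a rough illustration of checking for open ports, the following Python sketch attempts TCP connections to a handful of well-known ports using only the standard socket module. The host address is a placeholder, and probes like this should only be run against systems you own or are authorized to test.

# Minimal sketch: check whether common TCP ports are open on a host
# you are authorized to scan. The host and port list are placeholders.
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means the port accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "192.0.2.10"                 # placeholder (TEST-NET address)
    for port in (21, 22, 23, 80, 443, 3389):
        state = "open" if check_port(host, port) else "closed/filtered"
        print(f"{host}:{port} -> {state}")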
Software security has two distinct kinds of problems: bugs arising from bad code, which are either syntactic or semantic errors, and problems arising from attacks such as SQL injection, from bad software design, and from memory leaks. Error-ridden code, weak cohesion and strong coupling between modules, failure to scan the software, failure to test it for bugs, and regression errors are also causes for concern in software security. Logic bombs introduced purposely by disgruntled employees are hard to find, but code reviews can catch them.
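SQL injection is easy to demonstrate. The following Python sketch, using an in-memory SQLite database with a hypothetical users table, contrasts a query built by string concatenation, which an attacker can subvert, with a parameterized query that binds the input as data.

# Sketch: SQL injection via string concatenation vs. a parameterized query.
# Uses an in-memory SQLite database; the table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# VULNERABLE: attacker-controlled input is pasted into the SQL text,
# so the OR '1'='1' clause matches every row in the table.
query = f"SELECT * FROM users WHERE name = '{malicious}'"
print("injected:", conn.execute(query).fetchall())        # returns all rows

# SAFE: the placeholder binds the input as data, not as SQL syntax.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,))
print("parameterized:", safe.fetchall())                   # returns no rows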
In general, software security must be built in from the design stage all the way through deployment in the software development life cycle. If the organization is developing software in-house, how good is the development team with security? If the organization is purchasing software off the shelf, is the third-party software good enough? Has it been tested to withstand vulnerability-based attacks? Does the third-party software get regular updates, known as patches and service packs, to fix any vulnerabilities? These factors need to be taken into account to avoid hacks. A security reference monitor, part of the operating system, mediates logins and access between users and systems and maintains the associated log files for auditing.
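The reference-monitor idea can be sketched in a few lines of Python: every access request passes through a single mediation point, and every decision is written to an audit log. The policy table and the subject and object names below are hypothetical.

# Toy sketch of a reference monitor: every access request goes through
# one mediation point, and every decision is logged for auditing.
# The policy table and subject/object names are hypothetical.
import logging

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Policy: subject -> set of (object, permission) pairs it may exercise.
POLICY = {
    "alice": {("payroll.db", "read")},
    "bob":   {("payroll.db", "read"), ("payroll.db", "write")},
}

def access(subject: str, obj: str, permission: str) -> bool:
    """Mediate an access request and record the decision in the audit log."""
    allowed = (obj, permission) in POLICY.get(subject, set())
    logging.info("subject=%s object=%s perm=%s decision=%s",
                 subject, obj, permission, "ALLOW" if allowed else "DENY")
    return allowed

if __name__ == "__main__":
    print(access("alice", "payroll.db", "read"))   # True
    print(access("alice", "payroll.db", "write"))  # False, logged as DENY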
A generic development cycle in software uses three different machines or environments, known as development, test, and production servers. A programmer writing code or evaluating a new third-party software package is only allowed to do these things on the development machines and servers. Once development is complete, the code in executable form is transferred to a test machine, where an independent tester takes the lead in checking the functional requirements of the program. Once development and testing have passed, a code review might be conducted to remove any logic bombs or visible semantic errors in the code. At this point, additional security tests for vulnerabilities are performed. Only after these phases succeed is the program deployed on the production server.
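One common way to keep the three environments separate is to select configuration by an environment variable, so code never points at production by default. The APP_ENV variable name and the connection strings in this Python sketch are assumptions for illustration.

# Sketch: select settings per environment so development code never
# touches production data by default. APP_ENV and the connection
# strings are hypothetical.
import os

CONFIG = {
    "development": {"db_url": "sqlite:///dev.db",  "debug": True},
    "test":        {"db_url": "sqlite:///test.db", "debug": True},
    "production":  {"db_url": "postgresql://prod-server/app", "debug": False},
}

def load_config() -> dict:
    """Return the settings for the current environment, defaulting to development."""
    env = os.environ.get("APP_ENV", "development")
    if env not in CONFIG:
        raise ValueError(f"unknown environment: {env!r}")
    return CONFIG[env]

if __name__ == "__main__":
    cfg = load_config()
    print("running with", cfg)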