Updating headings on every page

2025-08-25 13:58:31 -04:00
parent 9630a14124
commit d51591cd05
17 changed files with 136 additions and 136 deletions


@@ -4,13 +4,13 @@ date: 2019-08-01
draft: false
---
# Security Blog
## Security Blog
This blog contains various summaries of minor research, reading, and independent learning regarding computer security.
Mostly this blog exists to satisfy the requirements for my Security+ certificate.
# Cert ID
## Cert ID
Security+ ID: COMP001021281239


@@ -4,43 +4,43 @@ date: 2023-11-13
draft: false
---
# Introduction
## Introduction
This is the first entry in a new set of summations. Previously we looked at "Secure Coding in C and C++"; this set will cover "Code Complete 2" by Steve McConnell, with a focus on security.
"Code Complete" uses a set of "metaphors" to describe software development styles. We will look at three of them, Penmanship, Farming, and Oyster Farming, and at how each could affect the security of the final product.
# Introduction Part II: Summarizing the Metaphors
## Introduction Part II: Summarizing the Metaphors
## Penmanship: Writing Code
### Penmanship: Writing Code
Writing a program is like writing a letter: just sit down and write it, start to finish.
## Farming: Growing Code
### Farming: Growing Code
Similar to planting one seed at a time, design and develop one piece at a time. Test each piece before moving on.
## Oyster Farming: Code Accretion
### Oyster Farming: Code Accretion
Code accretion is the slow growth of the whole system. Start with designing the system as a whole, create empty objects and methods, then start adding content and tests.
# Security Implications
## Security Implications
## Penmanship: Writing Code
### Penmanship: Writing Code
Writing code start to finish, like writing a letter, works great for a small program that only needs to complete one task. However, this approach quickly stops making sense as the complexity of the program grows: it becomes easy to miss pieces or to need rewrites as more complexity is added. Testing is also not built into this method.
Two security concerns are immediately apparent with this method: testing and intra-code communication. With no testing built in, it becomes much more difficult to find problems early; without finding problems early, they can become hidden by the complexity and harder to find. Also, without an overall design, it becomes much harder to have the different pieces of code communicate with each other. Without a coherent design, more bugs can be introduced through those interactions: parameter limits and return values can drift out of sync, and any change can cause cascading issues.
## Farming: Growing Code
### Farming: Growing Code
Here testing is built in, as each piece is not finished until testing is complete. This will alleviate bugs (as much as possible) in individual parts of the code. However, the issue of overall code complexity is still a problem: interaction between pieces isn't thoroughly tested, and any change can cause cascading issues.
### Oyster Farming: Code Accretion
#### Oyster Farming: Code Accretion
This metaphor provides the best option for creating secure large-scale projects. Starting with the overall design quickly shows how each piece needs to interact, produces (mostly) stable interfaces into each part, and reveals overall problems that might occur. And by writing a skeleton of the code first, any piece can be worked on at any time, since it isn't fully reliant on the other parts being complete.
# Conclusion
## Conclusion
These metaphors show how the way code is built can affect the final product. Each has its own security implications, and choosing the right one helps stop bugs before they form.


@@ -4,41 +4,41 @@ date: 2023-12-20
draft: false
---
# Introduction
## Introduction
Prerequisites are incredibly important to any development project; insecure design was [Number 4 on the OWASP Top 10](https://owasp.org/Top10/A04_2021-Insecure_Design/). For the purpose of this document we will talk about prerequisites in the context of *security implications*.
# Planning Comes First
## Planning Comes First
As the saying goes, failing to plan is planning to fail. Without a solid foundation, similar to building a house, the entire program can fall. With no plan in place, code can end up being added in a haphazard way, causing code paths to become unknown or unintentionally created. These code paths then become more difficult to maintain.
Another saying that applies: an ounce of prevention is worth a pound of cure. This is even more apt in software development from a security perspective. From a pure development standpoint, preparation can save the time spent later rewriting code that no longer fits, or shoehorning in code that doesn't quite fit. From a security standpoint, the cost becomes major if large security flaws are found in the software.
# How Prereqs are Used
## How Prereqs are Used
## Where is this Software Used
### Where is this Software Used
The first thing to determine is where this software is going to be used: personal-local, personal-network, business, mission-critical, or embedded. These all involve vastly different life cycles and security footprints.
### Personal Project
#### Personal Project
A personal software project is going to have a much smaller security footprint, particularly if it's only going to run locally. Personal projects can also be easily and quickly updated if flaws are found. While an initial plan is still needed, this type of project can be built a little more free form.
### Business
#### Business
For business code, software will still be updated, but on longer cycles. As such, a good plan and set of requirements are important, since any changes will take time to deploy. This is true for security problems as well. Businesses tend to be risk averse and will not want to update on a regular basis. Be sure to set the requirements and plan ahead of time.
### Mission Critical
#### Mission Critical
Users will be hard pressed to change this software if it's working; if it isn't broken, don't touch it. As such, it will get updated very infrequently. Here, requirements and design are critical to make sure everything is developed properly and securely.
### Embedded
#### Embedded
Never will it get updated ... that should be the assumption. These need to have extremely tight requirements set ahead of time with an incredibly secure design. Having a tight plan here would also include using tried and tested external dependencies. The plan should include very granular unit testing as well as integration testing.
## Iterative vs Sequential
### Iterative vs Sequential
### Iterative (as-you-go)
#### Iterative (as-you-go)
An initial design is definitely still needed, but the design should be flexible. New requirements will be added all the time.
@@ -46,7 +46,7 @@ This can be used for personal projects and some business applications. It's best
A good business example of this is a security application that monitors a system for security problems. An initial design can be made for the communication and for how detection will be done, but what will be detected will change over time.
### Sequential
#### Sequential
All requirements and design are complete before coding is done. This is a must for mission critical and embedded software projects. This approach is needed when changing things in the future is difficult or will cost a lot.
@@ -54,30 +54,30 @@ For this method and these types of projects the requirements need to be stable,
Additionally, in my opinion, these should be either small or a series of small projects. Smaller projects tend to be easier to audit and see code paths. This will reduce the possibility for security issues.
# Defining the Problem
## Defining the Problem
The first and most important step is defining what is being solved. Everything will stem from this, so make the problem statement as narrow and specific as possible.
The problem should also be easily understandable. Not only should everyone on the development team understand the problem statement, but the customer and users should also understand it. Without a clear problem, requirements will be difficult to define and may include things outside the scope of the project.
# Defining the Requirements
## Defining the Requirements
Having official requirements helps the user drive development rather than the programmer. This way the project will actually be useful to those using it.
## Evaluate the Requirements
### Evaluate the Requirements
STOP and make sure all requirements make sense and are specific enough. If anything doesn't make sense or is too vague, bring those concerns to the customer and have them get more specific. A good design can't happen without good requirements.
## Prep for Change
### Prep for Change
Using a strong problem statement, create an initial design that can handle some changes. The only thing that stays the same is that everything changes. Having a flexible design can help with those changes. An example could be having different parts of the code run independently, but have a strong stable design for communication between the parts.
## Change Control
### Change Control
Customers are going to want more; have a procedure in place to handle those requests. A formal request process can help filter vague or bad requests before they hit the developers.
With a strong problem statement, there can be a way to push back against requests as well. Any request that goes outside the scope of the problem statement does not get put into the current project.
# Conclusion
## Conclusion
The initial problem statement and set of requirements can make or break a project's security. By having a narrowly defined problem and set of requirements, it's much easier to design a system. With a robust design, security can be included from the beginning.


@@ -4,29 +4,29 @@ date: 2023-12-26
draft: false
---
# Introduction
## Introduction
Prerequisites are incredibly important to any development project; insecure design was [Number 4 on the OWASP Top 10](https://owasp.org/Top10/A04_2021-Insecure_Design/). For the purpose of this document we will talk about prerequisites in the context of *security implications*.
This ended up being too big of a topic for just one post, so here is part 2. In {{< ref "code-complete-summations-pre-requisets-part-1.md" >}}, we looked at why pre-reqs are needed in general and how they apply to types of projects. In part 2 we'll look at architectural pre-reqs.
# Planning Comes First
## Planning Comes First
As the saying goes, failing to plan is planning to fail. Without a solid foundation, similar to building a house, the entire program can fall. With no plan in place, code can end up being added in a haphazard way, causing code paths to become unknown or unintentionally created. These code paths then become more difficult to maintain.
Another saying that applies: an ounce of prevention is worth a pound of cure. This is even more apt in software development from a security perspective. From a pure development standpoint, preparation can save the time spent later rewriting code that no longer fits, or shoehorning in code that doesn't quite fit. From a security standpoint, the cost becomes major if large security flaws are found in the software.
# Why Architectural Prerequisites
## Why Architectural Prerequisites
General pre-reqs need to be generic enough that the customer and users can understand what is required. Architectural requirements are for the developers themselves. This is hugely important, as it keeps the code base consistent and easy to maintain. From a security perspective this is vital: the less chaos in the code, the fewer mistakes there will be. And by making the code more maintainable, any bugs or security flaws that are found will require less time and effort to fix.
By having a solid architectural foundation, the developers can break up work where appropriate. See [Data Design](#data-design) for more detail.
# Architectural Features
## Architectural Features
Architectural designs can be broken down into multiple pieces, each with their own considerations.
## Communication
### Communication
How will this software communicate, both between components in the project and with systems external to the project? Both the protocol and the data structure need to be defined.
@@ -34,11 +34,11 @@ Protocols are vital here as they will help determine how secure communications b
The data structure is critical to coordinate between every piece involved. See [Data Design](#data-design) for more detail.
## Major Classes
### Major Classes
Creating a skeleton of all the major classes will go a long way to ensuring good design, which in turn helps with keeping the project secure. By creating the skeleton it becomes more obvious what is missing and where each component will live. By having an experienced engineer design and build the skeleton, it also becomes easier to have junior devs take over the actual implementation.
## Data Design
### Data Design
The way the data is designed can have a major impact on security. There are different types of data to consider when designing a secure system. Any data that is considered sensitive, such as PII, should be encrypted both at rest and in transit. Any data that should not be able to be altered by a user should probably also be encrypted both at rest and in transit.
@@ -48,16 +48,16 @@ All data should be encrypted in transit to reduce the possibility of man-in-the-
The data design also needs to be agreed upon by all parties using the data. If the design and restrictions are interpreted differently on each end, read/processing errors will follow when the two sides expect different things.
## User Interface
### User Interface
The UI needs to be considered a separate component that uses an API to communicate with the backend. This modular approach allows flexibility and naturally leads to defense in depth: if the UI is treated as separate, then all user input should be sanitized on the UI side, and since the backend is also a separate component, all user input should be sanitized on the backend as well.
## Error Processing and Logging
### Error Processing and Logging
Here is a big one: error handling needs to be designed from the beginning. If errors are handled through a mix of exceptions, integer returns, parameters passed by reference, etc., this will lead to confusion in the code base and errors will be missed. There should be one error-handling design used universally across the project, so all developers know how to handle errors.
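As a minimal sketch of what a single project-wide error convention might look like in C (the `err_t` type and `parse_port` helper are hypothetical examples, not from the book):

```c
#include <stdio.h>

/* Hypothetical project-wide convention: every function returns an
 * err_t and passes results back through output parameters. */
typedef enum {
    ERR_OK = 0,
    ERR_INVALID_ARG,
    ERR_IO,
    ERR_OUT_OF_MEMORY
} err_t;

static err_t parse_port(const char *text, int *out_port)
{
    if (text == NULL || out_port == NULL)
        return ERR_INVALID_ARG;

    int port = 0;
    if (sscanf(text, "%d", &port) != 1 || port < 1 || port > 65535)
        return ERR_INVALID_ARG;

    *out_port = port;
    return ERR_OK;
}
```

Every caller then checks errors the same way, so error paths can't be silently mixed up with exceptions or sentinel return values.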
As for logging, it needs to be taken into account for two big reasons. The first is to capture the appropriate amount of information to diagnose and fix errors. The second is how much data, and what data, will be written out, as logging too much could unintentionally leak sensitive information.
# Conclusion
## Conclusion
Having a good design from the beginning can help prevent problems before they even arise. Then if bugs and security issues are found, having a good architecture will help to locate those issues faster.


@@ -4,39 +4,39 @@ date: 2024-03-05
draft: false
---
# Introduction
## Introduction
Prerequisites are incredibly important to any development project; insecure design was [Number 4 on the OWASP Top 10](https://owasp.org/Top10/A04_2021-Insecure_Design/). For the purpose of this document we will talk about prerequisites in the context of *security implications*.
This ended up being too big of a topic for just two posts, so here is part 3. In {{< ref "code-complete-summations-pre-requisets-part-1.md" >}}, we looked at why pre-reqs are needed in general and how they apply to types of projects. In {{< ref "code-complete-summations-pre-requisets-part-2.md" >}} we looked at how various architectural pre-reqs help with security. In part 3 we'll look at resource and error management pre-reqs.
# Planning Comes First
## Planning Comes First
As the saying goes, failing to plan is planning to fail. Without a solid foundation, similar to building a house, the entire program can fall. With no plan in place, code can end up being added in a haphazard way, causing code paths to become unknown or unintentionally created. These code paths then become more difficult to maintain.
Another saying that applies: an ounce of prevention is worth a pound of cure. This is even more apt in software development from a security perspective. From a pure development standpoint, preparation can save the time spent later rewriting code that no longer fits, or shoehorning in code that doesn't quite fit. From a security standpoint, the cost becomes major if large security flaws are found in the software.
# Resource Management
## Resource Management
Resource management includes not just how much memory or processing power is used, but also database connections, threading, and file handles. These are vital to security, as they deal with how data is accessed and processed.
By planning resource management ahead of time, a lot of the issues can be avoided from the beginning. If this isn't taken into account, the code will need to be retrofitted to fix any security issues, which can lead to inconsistent handling of resources or old code being left behind. Planning ahead is the best way to combat these security problems.
## Databases
### Databases
Databases require a thoughtful setup. Using a secure password is only the start; encrypting the database (particularly file-based DBs, such as SQLite) should also be considered. If the database is remote, a secure connection is necessary.
In addition, how the data is accessed needs to be considered as well. This includes sanitizing user input, when and how to update data, and read/write sequences. Getting these wrong could leak or corrupt data.
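As a sketch of the "sanitize user input" point using SQLite (mentioned above), with a hypothetical `users` table: the key is binding untrusted input as a parameter rather than concatenating it into the SQL.

```c
#include <sqlite3.h>

/* Look up a user by name. The untrusted username is bound as data,
 * so it can never change the structure of the query. */
int find_user_id(sqlite3 *db, const char *username)
{
    sqlite3_stmt *stmt = NULL;
    int id = -1;

    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}
```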
## Threading
### Threading
Threading matters for data integrity: the data could be corrupted if read/write sequences are off, such as when two writes occur at the same time or data is read while a write is happening. Variables shared across threads can also cause security issues such as use-after-free, double free, or accessing data outside of scope.
## File Handles
### File Handles
In a previous post, {{< ref "secure-coding-in-c-summations-file-io.md" >}}, we discussed in detail why file access is a security issue. Designing from the beginning how files are going to be accessed will greatly reduce those security issues.
# Error Processing
## Error Processing
Being able to handle errors properly is critical for security. There are a few questions that need to be answered that will have an impact on how the software is architected:
@@ -47,6 +47,6 @@ Being able to handle errors properly is critical for security. There are a few q
These questions shouldn't be taken lightly. Each one should be considered, since each could cause security issues. In addition, just keeping things consistent is good for security, since all developers will know what to expect from another developer's code.
# Conclusion
## Conclusion
Having a good design from the beginning can help prevent problems before they even arise. Then if bugs and security issues are found, having a good architecture will help to locate those issues faster.


@@ -4,11 +4,11 @@ date: 2024-02-23
draft: false
---
# Introduction
## Introduction
In this summation of "Code Complete 2" by Steve McConnell we will focus on variable naming and usage and how they relate to security. Variable naming is an essential aspect of software development, and it plays a critical role in ensuring software security.
# Importance of Variable Naming
## Importance of Variable Naming
Variable naming is important for software security because it helps to prevent common programming errors that can lead to security vulnerabilities. For example, if a variable is named incorrectly, it can be difficult to understand its purpose, which can lead to confusion and errors in the code. This can make it easier for attackers to exploit vulnerabilities in the software.
@@ -16,7 +16,7 @@ In addition, poorly named variables can make it difficult to identify and fix se
On the other hand, well-named variables can help to prevent security vulnerabilities by making it clear what data the variable contains and how it should be used. For example, a variable named "sanitizedUserInput" clearly indicates that the data has been sanitized and is safe to use in a SQL query or HTML page.
# How to Name Variables
## How to Name Variables
There are several things to keep in mind when naming variables:
@@ -26,26 +26,26 @@ There are several things to keep in mind when naming variables:
4. Use consistent naming conventions: Use consistent naming conventions throughout your code. This will make it easier to understand and maintain your code, and will help to prevent errors and security vulnerabilities.
5. Avoid using sensitive data in variable names: Avoid using sensitive data, such as user passwords or credit card numbers, in variable names. This will help to protect the data from being accidentally exposed or leaked.
# Variable Usage
## Variable Usage
Another important aspect of variables is how they are used.
## Position
### Position
Similar to how good variable names help with the readability and maintainability of the codebase, so does variable position. Declaring a variable close to its usage keeps the code organized, and it helps make sure variables are freed when leaving scope.
Keeping the position close to the use also keeps the variable's "live" time short. The less code a particular variable spans, the less likely it is to be misused.
## Initialization
### Initialization
All variables should also be initialized as they are declared. By doing so, we avoid attempting to use an uninitialized variable. This is particularly important for pointers, as uninitialized pointers can cause memory leaks or out-of-bounds writes.
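A small sketch of the idea (the function and names are mine, for illustration only):

```c
#include <stdlib.h>

void example(size_t len)
{
    size_t count = 0;            /* plain values start in a known state */
    char *buffer = NULL;         /* pointers start as NULL, never garbage */

    buffer = calloc(len + 1, 1); /* calloc also zeroes the allocation */
    if (buffer == NULL)
        return;                  /* handle allocation failure explicitly */

    /* ... use buffer and count ... */

    free(buffer);
    buffer = NULL;               /* guard against accidental use-after-free */
}
```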
## One Purpose
### One Purpose
When declaring and using a variable, make sure it's only used for one specific purpose. If the reason for a variable's existence changes partway through, it can make the code base very confusing and hard to maintain. It can also lead to mistaken identity, which will cause errors.
This is especially problematic for weakly typed languages, as not only the purpose, but also the type, of the variable could change.
# Conclusion
## Conclusion
In conclusion, variable naming and usage are important aspects of software security. By using descriptive and meaningful variable names, you can help prevent common programming errors and security vulnerabilities. By keeping variables close, short-lived, and single-purposed, you increase the maintainability of the codebase and reduce the possibility of misuse.


@@ -4,17 +4,17 @@ date: 2023-03-30
draft: false
---
# Introduction
## Introduction
Currently one of my projects uses "pinned" certs to securely communicate back to a REST service. The certs are pinned to allow truly secure authentication of the server, preventing a rogue certificate authority (CA) from issuing a fake cert and enabling man-in-the-middle (MITM) attacks. This is a huge hassle, as the server and client need to stay in sync: it involves cutting a new release just to update certs, then trying to get it deployed within the expiration/reissue window. [Enrollment over Secure Transport](https://www.rfc-editor.org/rfc/rfc7030.html) (EST) should provide a better way to issue certs from the server, so the client just has to request the new ones and download them.
# What is EST
## What is EST
EST allows a client to authenticate to the EST server, which then delivers a client cert. This could be unique to the client or generic for all clients. Issued certificates can then be used to re-authenticate to the EST to get the updated cert. By having this re-authentication method, a client can automatically get the most up-to-date cert in a secure way. By not having it compiled into the binary (i.e. pinning) a new release is not needed to simply update the cert.
To do this, the client authenticates to the EST server, either via public/private key pair or username/password, and the client authenticates the server, either through the same public/private key challenge or external CA. Once authenticated, the EST server will issue the correct cert. All communication is over a TLS connection.
# Possible Setup
## Possible Setup
First, no to username/password. With username/password authentication, the client is reliant on an external CA to authorize the server, which is what "pinning" was supposed to remove. So, if username/password is used, there is no real need for an EST server and the client can just connect directly to the server (for our use case).
@@ -33,7 +33,7 @@ Cons of a separate key
Being able to easily revoke and re-issue a private key is the deciding factor for me; this is the true solution to the problem of pinning. Building the private key into the binary helps with the pinning issue, as it doesn't need to be updated as frequently, but it really just delays the problem. Yes, it's more work for the client to get everything set up, but a little inconvenience shouldn't get in the way of good security.
# Final Proposal
## Final Proposal
The final setup could look something like this:
@@ -50,7 +50,7 @@ Once the software is installed it would:
1. Client uses TLS cert to connect and authenticate to backend server
1. When TLS cert expires, it can be used to re-auth with the EST and download the next TLS cert
# Conclusion
## Conclusion
Using this method of authentication with a pub/priv key pair to an EST, then using the issued TLS cert for authentication is the best way to remove the need for pinned certificates and username/passwords. The private key is the primary way the client authenticates, since it uses that key pair to get the TLS cert. Using the TLS cert for authentication makes it so a client doesn't need to continuously update passwords. By having the private key separate from the binary, and the TLS cert for authentication, it becomes relatively simple to re-issue creds when a system is compromised.


@@ -4,15 +4,15 @@ date: 2019-09-26
draft: false
---
# Introduction
## Introduction
In this post we will explore a brief overview of the fast-flux (FF) technique used by botnets. [Here is my full paper](/security/FastFluxPaper.pdf) with more detail regarding what a botnet is and how FF works.
# Botnet Overview
## Botnet Overview
Botnets are a major threat to all those connected to the Internet. They are used for distributing spam, hosting malicious code, sending phishing attacks, and performing a variety of attacks, including denial of service (DoS). Many botnets use DNS names to control or connect to the botnet. This would seemingly be easy to shut down, just block the particular domain; however, through a technique called fast-flux (FF), botnets are able to evade detection and mitigation.
# Fast Flux Overview
## Fast Flux Overview
Fast-flux is the process of quickly changing the domain name or the IP addresses associated with a domain in order to hide the bot-master, or command and control (CC), of the botnet. These fast changes are accomplished through two primary technologies: dynamic DNS (DynDNS) and round robin.
@@ -24,11 +24,11 @@ Round robin was a technique developed for load balancing. Sites that see a large
In addition to DynDNS and round-robin, some botnets will be double-fluxed. In this technique a botnet will setup its own name servers and rotate through them as well. More detail is in the paper.
# Detection/Mitigation
## Detection/Mitigation
There are two primary ways of detecting and mitigating fast-fluxing botnets, and they need to be used in conjunction. The first is to look at the time to live (TTL) for which DNS entries are cached; fast-fluxing botnets tend to use very short TTL values compared to legitimate domains. The second is keeping an "FF Activity Index", a measure of how often name-address relationships change. The "FF Activity Index" tracks both how often the IP address for a given domain changes and how often domains change for a single IP address. Even looking at these two indicators still yields a number of false positives. More details in the paper.
# Conclusion
## Conclusion
Botnets are getting more sophisticated, and more research is needed to detect these techniques. The best way to block these connections is to attempt to stop the CC directly, but most hide behind proxies and many use FF techniques to hide those. FF is an arms race between detection and ever more sophisticated ways of hiding activity.


@@ -4,27 +4,27 @@ date: 2024-03-22
draft: false
---
# Introduction
## Introduction
Pseudo-random number generators (PRNGs) play a crucial role in modern cryptography and information security. These algorithms generate seemingly random sequences of numbers, which are essential for tasks like encryption, secure key generation, and digital signatures. PRNGs in the past have had many issues with predictability. Looking at the current and future research requires a look at how predictable the numbers really are.
# External Techniques
## External Techniques
Several techniques have arisen to generate random numbers, both on local machines and by harnessing real-world chaos. Here are a few ways physical phenomena can be used to generate random seeds.
## Lava-lamps
### Lava-lamps
[Lavarand](https://en.wikipedia.org/wiki/Lavarand) uses a video of a wall of lava lamps to generate random numbers. It does so by taking a high-definition screenshot of the video feed, then hashing that image to generate a seed for a PRNG. The more random the seed, the more random the numbers that will be generated. Since the state of the lava lamps, particularly accumulated over all the lamps, is unpredictable, the seed is also unpredictable.
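A toy sketch of the hash-a-snapshot idea (FNV-1a and `srand` stand in here for the cryptographic hash and CSPRNG a real system would use):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hash an unpredictable snapshot (raw image bytes) down to a seed.
 * FNV-1a is NOT a cryptographic hash; it just illustrates the idea. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;               /* FNV prime */
    }
    return h;
}

void seed_from_snapshot(const uint8_t *image, size_t len)
{
    srand((unsigned int)fnv1a(image, len));  /* unpredictable input -> seed */
}
```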
## Radioactive Decay
### Radioactive Decay
Using Geiger counters to detect the background decay of radioactive material allows the generation of random seeds as well. As far as we currently know, radioactive decay has no distinct pattern and is thus unpredictable. Using it to generate seeds for PRNGs will produce random numbers.
## Background Sound
### Background Sound
Another physical phenomenon that is difficult to predict is background noise. It's almost impossible to predict not just what will be making sound at any given moment, but also the direction, intensity, and frequency of that sound. By hashing background noise, a random seed can be generated, making it almost impossible to predict the output of a PRNG.
# Internal Techniques
## Internal Techniques
Not all personal computers have access to these physical phenomena. If they don't have access to a camera, microphone, network connection, or Geiger counter, there are sensors that most computers do have that can be used. Most motherboards and graphics cards have both power meters and temperature sensors. Taking as accurate a measurement as possible of temperature, electrical draw, fan speeds, and time can produce fairly unpredictable values. Some of these values are correlated (i.e., higher electrical draw leads to higher temperatures, which leads to higher fan speeds), but they should still produce numbers that are unpredictable enough. Using all these values together is a good way to generate a seed.
@@ -32,7 +32,7 @@ Another way is to track user movements. By having a user move around the mouse p
These internal techniques do not require permissions to external resources.
# Conclusion
## Conclusion
Most PRNGs in the past used simple seeds, usually just the time of the run. Newer techniques create more random numbers by using real-world conditions. Using those conditions to generate seeds provides better pseudo-random numbers.


@@ -4,17 +4,17 @@ date: 2020-04-17
draft: false
---
# Introduction
## Introduction
After reading through "Silence on the Wire" by Michal Zalewski for the 8th time, I decided I wanted to try the random algorithm analysis he did in Chapter 10. He looked at the relationship between sequential numbers by graphing them in a 3D scatter plot. My idea was to see if any of the algorithms had been updated to make them more secure.
There was a problem with that, however: I only own one computer and it's too low-powered to run VMs. So I was stuck with the Python algorithm, shuf, urandom, and two online random number generators. This was a big limitation, and I hope to update this whenever I get a new computer.
# The Importance
## The Importance
Random algorithms cannot be predictable, for security reasons. All encryption algorithms use random digits to generate keys; if the keys are predictable, then the encryption can be broken. "Silence on the Wire" showed some random algorithms having limited ranges or predictable patterns that reduce the search space. Luckily the newer algorithms seem to be doing better.
# The Math
## The Math
Using the math in "Silence on the Wire" to create the graphs allows me to compare more directly to Mr. Zalewski's results. Of course this ended up not really mattering, since I was so limited. For a better explanation see Chapter 10 of the book, but here is a quick rundown: take data samples S0, S1, S2, ..., a randomly generated sequence, then calculate the deltas and graph those.
@@ -33,13 +33,13 @@ Then we graph the deltas in a 3D scatter plot using the following points:
- `x = S[t+1] - S[t]`
- `y = S[t+2] - S[t+1]`
- `z = S[t+3] - S[t+2]`
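A minimal sketch of producing those points from a sample array (the function is mine, not from the book):

```c
#include <stddef.h>
#include <stdio.h>

/* Turn a sequence of samples into 3D points for a scatter plot:
 * each point is three consecutive first differences (deltas). */
void print_delta_points(const long *s, size_t n)
{
    for (size_t t = 0; t + 3 < n; t++) {
        long x = s[t + 1] - s[t];
        long y = s[t + 2] - s[t + 1];
        long z = s[t + 3] - s[t + 2];
        printf("%ld %ld %ld\n", x, y, z);
    }
}
```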
# The Samples
## The Samples
The data came from the following locations: JS Math, Python's numpy package, random.org, Bash's shuf, and urandom. Here are the graphs that were produced ... don't get excited, they are all basically the same:
Unfortunately my blog server crashed, so I've lost the images for now; I'll add them back later. The long and short of it is they all look basically the same.
# Conclusion
## Conclusion
Why are these all basically the same? Probably because they all use the same exact algorithm. I was hoping Python had its own built-in PRNG, but it appears to use whatever the host provides. It makes sense that the shuf command and urandom are the same: shuf is kind of just a wrapper around urandom that gives the user more control.


@@ -4,7 +4,7 @@ date: 2022-12-06
draft: false
---
# INTRODUCTION
## INTRODUCTION
RSA is a public key cryptosystem, which was named after the creators of
the algorithm: Rivest, Shamir, and Adleman [@STALLINGS]. It is widely
@@ -55,7 +55,7 @@ decrypting messages. However the same instruction set architecture we
propose in this paper can be used for signing and verifying messages
with RSA.
# CHARACTERISTICS OF RSA
## CHARACTERISTICS OF RSA
There are three areas in which RSA can be optimized: finding the
encryption and decryption exponent, prime number generation, and
@@ -64,7 +64,7 @@ finding the encryption and decrypting exponent, prime number generation,
and encrypting and decrypting the message is usually done, without
specialized instructions.
## ENCRYPTION AND DECRYPTION EXPONENT
### ENCRYPTION AND DECRYPTION EXPONENT
The current approach to verifying that the encryption exponent is
coprime to `φ(n)` is to use the Euclidean Algorithm. To find
@@ -150,7 +150,7 @@ we will use two instructions that will combine some of the instructions
used in the Extended Euclidean Algorithm to reduce the number of stalls
within the loop.
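For reference, here is the software baseline being optimized: a plain-C Euclidean Algorithm (a sketch, not the paper's specialized-instruction version). Each loop iteration needs the division that causes the stalls discussed above.

```c
/* e is a usable encryption exponent when gcd(e, phi_n) == 1. */
unsigned long gcd(unsigned long a, unsigned long b)
{
    while (b != 0) {
        unsigned long r = a % b;  /* the modulo is the costly step */
        a = b;
        b = r;
    }
    return a;
}

int is_valid_exponent(unsigned long e, unsigned long phi_n)
{
    return gcd(e, phi_n) == 1;
}
```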
## PRIME NUMBER GENERATION
### PRIME NUMBER GENERATION
The common approach to generating large primes for making encryption and
decryption keys is to randomly select integers and test them for
@@ -199,7 +199,7 @@ shown below). The instructions to calculate `x^{2} (mod n)` would be
executed `s` times. These two factors indicate a heavy reliance on the
ability of a system to calculate exponentiation.
## ENCRYPTION AND DECRYPTION
### ENCRYPTION AND DECRYPTION
One aspect of RSA to improve upon is performing large exponentiation.
Currently the implementation of exponentiation is performed by the
@@ -222,13 +222,13 @@ to be available. We can lessen the number of stalls using a technique
known as exponentiation by squaring. This technique is explained further
in the design section.
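For readers unfamiliar with the technique, here is a plain-C sketch of exponentiation by squaring (the software version; the paper's point is to support this pattern with dedicated instructions). It assumes `n < 2^32` so intermediate products fit in 64 bits.

```c
#include <stdint.h>

/* Computes base^exp (mod n) bit by bit: O(log exp) multiplications
 * instead of exp of them. Assumes n < 2^32 to avoid overflow. */
uint64_t mod_pow(uint64_t base, uint64_t exp, uint64_t n)
{
    uint64_t result = 1;
    base %= n;
    while (exp > 0) {
        if (exp & 1)                      /* low bit of exponent set */
            result = (result * base) % n;
        base = (base * base) % n;         /* square for the next bit */
        exp >>= 1;
    }
    return result;
}
```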
# DESIGN
## DESIGN
In this section, we will describe specialized instructions that will be
used for prime number generation, computing the encryption and
decryption exponent, and encrypting and decrypting a message.
## ENCRYPTION AND DECRYPTION EXPONENT
### ENCRYPTION AND DECRYPTION EXPONENT
The issue with implementing the Euclidean Algorithm the traditional way
is that divide, multiply, and subtract instructions are needed for each
@@ -320,7 +320,7 @@ an analysis to the speedup given to the Extended Euclidean Algorithm by
using the modular instruction and the multiply-subtract instruction in
the justification and analysis section.
## PRIME NUMBER GENERATION, ENCRYPTION, AND DECRYPTION
### PRIME NUMBER GENERATION, ENCRYPTION, AND DECRYPTION
One issue already discussed in the previous section is that of stalls
during large exponents. The way exponentiation is handled causes many
@@ -375,12 +375,12 @@ depending on the digit, may use the second accumulator. It will then run
through one more multiplier to multiply the squares by the 1's
multiplier.
# JUSTIFICATION AND ANALYSIS
## JUSTIFICATION AND ANALYSIS
In this section, we will describe how our specialized instructions will
improve the performance of the RSA encryption.
## ENCRYPTION AND DECRYPTION EXPONENT
### ENCRYPTION AND DECRYPTION EXPONENT
Using the modular instruction in the Euclidean Algorithm, we can reduce
the number of stalls needed. Instead of needing to stall for the result
@@ -441,7 +441,7 @@ speedup of 1.23. Also, an advantage to using the modular and
multiply-subtract instructions is that we reduce the number of temporary
registers needed from five to three.
## PRIME NUMBER GENERATION, ENCRYPTION, AND DECRYPTION
### PRIME NUMBER GENERATION, ENCRYPTION, AND DECRYPTION
Using the pow instruction, we can cut the stalls from large exponents in half.
Since the algorithm breaks the exponent into a binary representation of
the following equations to determine the overall speedup:
Speedup = 1.25/1.125
Speedup = 1.11
# CONCLUSIONS
## CONCLUSIONS
In analyzing the typical algorithms used as a part of RSA, we have
identified two primary bottlenecks in both encryption and decryption:
@@ -502,7 +502,7 @@ system with the sole task of encrypting and decrypting messages using
RSA, we can create hardware that allows the specialized instructions to
have a latency comparable to the traditional instructions.
# Bibliography
## Bibliography
3. Beauchemin, Pierre; Brassard, Gilles; Crépeau, Claude; Goutier, Claude; and Pomerance, Carl. "The Generation of Random Numbers That Are Probably Prime."


@@ -4,27 +4,27 @@ date: 2023-01-27
draft: false
---
# Introduction
## Introduction
Continuing summarizing the themes in "Secure Coding in C and C++" by Robert C. Seacord, we will discuss concurrency. When code runs at the same time and needs access to the same resources, lots of issues can occur, ranging from the annoying (reading incorrect data), to halting deadlocks, to outright vulnerabilities.
The tl;dr: use `mutex`es. There are a lot of methods for controlling concurrency, but many use a `mutex` in the background anyway. A `mutex` is the closest thing to guaranteed sequential access with the least risk of deadlock.
## Importance
### Importance
To quote Robert C. Seacord: "There is increasing evidence that the era of steadily improving single CPU performance is over. ... Consequently, single-threaded applications performance has largely stalled as additional cores provide little to no advantage for such applications."
In other words, the only real way to improve performance is through multi-threaded/multi-process applications, so being able to handle concurrency is very important.
# The Big Issue
## The Big Issue
Race conditions! That's the big issue: two or more threads or processes attempting to access the same memory or files. Problems arise when two writes happen concurrently, reads occur before writes, or reads occur during writes. This can lead to incorrect values being read, incorrect values being set, or corrupted memory. These types of flaws, and insufficient fixes for them, can cause vulnerabilities in the program as well.
# How Do We Keep Memory Access Sane
## How Do We Keep Memory Access Sane
So what is the fix? There are several possible ways to keep things in sync, but the number one way that will "always" work is a `mutex`. In fact, most of the other "solutions" are just an abstracted `mutex`. We will briefly go over a few solutions: shared/global variables, `mutex`es, and atomic operations.
## Shared/Global Variables
### Shared/Global Variables
A simple solution that is **NOT** robust is having a shared "lock" variable. A variable, call it `int lock`, which is `1` when locked and `0` when unlocked, is accessible between threads. When a thread wants to access a memory location, it checks that the variable is in the unlocked state, `0`, locks it by setting it to `1`, then accesses the memory location. At the end of its access, it sets the variable back to `0` to "unlock" the memory location.
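A sketch of the pattern and why it fails (names are mine):

```c
int lock = 0;  /* shared between threads: 0 = unlocked, 1 = locked */

void broken_critical_section(void)
{
    while (lock == 1) {
        /* spin, waiting for the unlock; the compiler may even cache
         * `lock` in a register here (the optimization issue below) */
    }
    /* RACE: another thread can pass the check in this gap, before we
     * set lock = 1, and both threads enter the critical section. */
    lock = 1;

    /* ... access the shared memory ... */

    lock = 0;  /* "unlock" */
}
```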
@@ -36,7 +36,7 @@ The third issue is compiler optimization (future blog coming regarding that hot
The third issue *can* be solved through compiler directives, but that still doesn't solve the first two issues.
## `mutex`
### `mutex`
Fundamentally, a `mutex` isn't much different than a shared variable. The `mutex` itself is shared among all threads. The biggest difference is that it doesn't suffer from any of the three issues: the threading library handles things properly such that a "check" on the `mutex` and a "lock" happen atomically (meaning nothing can happen in between). This handles both the issue of reading the variable before another thread writes and the compiler trying to optimize things away. `mutex`es also handle waiting a little differently, and thus need less CPU to wait.
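A minimal sketch of the same critical section done with a `mutex` (the book covers the concept generally; POSIX threads is just one common API):

```c
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;

void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);    /* check-and-lock happens atomically */
    shared_counter++;             /* only one thread can be here at a time */
    pthread_mutex_unlock(&lock);
    return NULL;
}
```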
@@ -44,13 +44,13 @@ The only drawback to the `mutex` is that it can still cause a *deadlock* when no
To solve the possible *deadlock* of not unlocking the `mutex`, atomic operations were added.
## Atomic Operations
### Atomic Operations
Atomic operations attempt to solve the issue of forgetting to unlock the `mutex`. An atomic operation is a single function call that performs multiple actions on a single shared variable. These operations can be checking and setting (thus making them semi-useful as a shared locking variable), swapping values, or writing values.
Atomic operations are very limited in their use case, since there are only so many built-in operations. If they work for your use case, there really isn't much downside to using them. However, since they are limited and use a `mutex` in the background anyway, a `mutex` with proper error handling and releasing is probably the best way to go.
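A small C11 sketch of the kinds of operations described (whether they are lock-free or fall back to a lock internally depends on the platform):

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_int counter = 0;

void safe_increment(void)
{
    atomic_fetch_add(&counter, 1);   /* one indivisible read-modify-write */
}

bool try_acquire(atomic_int *flag)
{
    int expected = 0;
    /* Atomically: if *flag == 0, set it to 1 and return true. */
    return atomic_compare_exchange_strong(flag, &expected, 1);
}
```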
## Other Solutions
### Other Solutions
Lock Guard:
- C++ object that handles a `mutex`, useful for not having to worry about unlocking the `mutex`, only real downside is it's C++ only
@@ -61,21 +61,21 @@ Fences:
Semaphore:
- `mutex` with a counter. Can have good specific use cases, but just uses a `mutex` in the background. Unless needed, just use a `mutex`
# Obvious bias is Obvious
## Obvious bias is Obvious
Just use a `mutex`. Most of the additional solutions are either simply a `mutex` in the background or cause other issues. A `mutex` will just work. Just be sure to properly unlock when done, and consider timeouts in case another thread gets stuck.
With a `mutex` you have way more control over the code and way more flexibility in how it's used. An arbitrary amount of code can be put in between without having to finagle a use case into a limited number of function calls.
# Keep it Sane
## Keep it Sane
There is one additional tip for concurrency: lock as little code as possible. Having as few operations as possible between a `mutex` lock and unlock reduces the possibility of timeouts, deadlocks, and crashing. It also helps reduce the chance of forgetting to unlock. Do not surround an entire method (or methods) with locks, rather just the read and write operations.
## The Dreaded `GOTO`
### The Dreaded `GOTO`
When it comes to locking, `goto` is your friend. Have an error or exception inside a lock? `goto` the unlock. This also works for clearing memory: have an error, `goto` the `free` and memory cleanup. Keep the `goto` sane by only jumping within the current method.
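A sketch of the pattern with a `mutex` (the error condition and names are made up):

```c
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

int update_shared(int value)
{
    int rc = -1;
    pthread_mutex_lock(&lock);

    if (value < 0)
        goto unlock;        /* error path still releases the lock */

    /* ... write value into the shared state ... */
    rc = 0;

unlock:
    pthread_mutex_unlock(&lock);
    return rc;              /* single exit, lock always released */
}
```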
# Conclusion
## Conclusion
Just use a `mutex`; everything else is either more error-prone, more limiting, or just uses a `mutex` in the background anyway. Keep things sane by locking as little code as possible. And always make sure to throw locks around access to common memory space.


@@ -4,7 +4,7 @@ date: 2023-06-29
draft: false
---
# Introduction
## Introduction
Continuing summarizing the themes in "Secure Coding in C and C++" by Robert C. Seacord, we will discuss file I/O and how to prevent unauthorized access. File I/O is especially dangerous when a program runs in a privileged context and accesses files that unprivileged users can reach, which can allow an attacker to read or even overwrite privileged files.
@@ -12,7 +12,7 @@ The tl;dr; here is, use proper file permissions, verify file paths, and use the
This post assumes basic knowledge of file system permissions and how paths are determined.
# Big Issues
## Big Issues
There are several issues that can arise while attempting to access files on the system:
@@ -22,9 +22,9 @@ There are several issues that can arise while attempting to access files on the
Without properly handling these three primary issues, a process could leak information or provide a path for an attacker to alter system files.
# Unauthorized Path Access
## Unauthorized Path Access
## Manipulated Paths
### Manipulated Paths
Similar to SQL injection, a user can manipulate a path to attempt to access locations they shouldn't be able to reach. The classic example is using the `..` notation to go up a directory level. Using multiple `../../../../` will eventually reach the root of the system, allowing a malicious user to access the entire file system.
@@ -36,13 +36,13 @@ There are enough ways to perform directory traversal that it becomes difficult t
By requesting the absolute path, all these tricks are flattened into a standard path. Then the program can verify it should be accessing that path.
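A sketch using POSIX `realpath()` (the `/srv/files/` base directory is a made-up example):

```c
#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* Canonicalize the requested path, then confirm it stays inside the
 * allowed directory. */
int is_path_allowed(const char *requested)
{
    char resolved[PATH_MAX];
    const char *base = "/srv/files/";   /* trailing slash matters */

    if (realpath(requested, resolved) == NULL)
        return 0;   /* unresolvable path: reject */

    /* "../", "//", and symlink tricks are flattened by realpath,
     * so a simple prefix check is now meaningful. */
    return strncmp(resolved, base, strlen(base)) == 0;
}
```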
# Bad File Permissions
## Bad File Permissions
On the surface this one is pretty simple: when creating a file, give it the most restrictive access that still allows functionality to continue. By limiting access, a malicious actor will have a harder time viewing and manipulating the data. This should definitely be done, but there are also some more subtle ways to keep things secure.
Other file attributes need to be considered as well. By checking and storing things like the inode number, link status, and device ID, there is more assurance that this is the correct file and that it hasn't been replaced.
# Principle of Least Privilege
## Principle of Least Privilege
Keep the program running as an unprivileged user and only request more privileges when needed. This is good advice for any program, but it comes in especially handy for file I/O.
@@ -50,7 +50,7 @@ In this case, when accessing a globally accessible file (such as in `/tmp`) the
If the program is running unprivileged when accessing these files, any trick that redirects it to a privileged file will simply produce a file system error. This prevents the program from accessing files out of scope.
# Conclusion
## Conclusion
There are a few takeaways from exploring issues with File I/O.


@@ -4,7 +4,7 @@ date: 2022-08-17
draft: false
---
# Introduction
## Introduction
Continuing the series of summarizing the themes in "Secure Coding in C and C++" by Robert C. Seacord, we will discuss freeing pointers. The title of this section is specifically about setting to `NULL` after calling free, but this post will cover a lot more than that. Here we will discuss the problems with forgetting to free, functions whose return value needs to be freed, and freeing without allocating/double free.
@@ -12,11 +12,11 @@ As for the title of this piece, some of the most common problems can be solved s
This is written for an audience that has a broad overview of security concepts. Not much time is spent explaining each concept, and I encourage everyone to read the book.
# Always `free` When Done
## Always `free` When Done
First off, let's discuss why `free` is important. Without freeing allocations, the best case scenario is leaked memory, and the worst case is introduced vulnerabilities.
## Memory Leaks
### Memory Leaks
When non-pointer variables are declared, they are restricted to the scope in which they were created, and their memory is reclaimed at the end of that scope. Allocated memory, however, is not restricted by scope. So if it is not freed before its pointer goes out of scope, that memory will still be held by the process. Depending on how large these allocations are, you could fill memory quite quickly. At best this will crash your own program (if the OS restricts memory); at worst you will crash the system.
@@ -24,13 +24,13 @@ One of the best ways to handle this is with `goto`'s. Yes, despite the hate for
Also, using the `goto` and anchor prevents another possible vulnerability, use-after-free, which is discussed in the next section.
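A sketch of the `goto`-anchor pattern (names and buffer sizes are mine); note that `free(NULL)` is a safe no-op, which is what makes the single anchor work:

```c
#include <stdlib.h>
#include <string.h>

int process(const char *input)
{
    int rc = -1;
    char *copy = NULL;      /* initialized so cleanup is always safe */
    char *scratch = NULL;

    copy = strdup(input);   /* strdup allocates; we must free it */
    if (copy == NULL)
        goto cleanup;

    scratch = malloc(1024);
    if (scratch == NULL)
        goto cleanup;

    /* ... work with copy and scratch ... */
    rc = 0;

cleanup:                    /* single anchor frees everything */
    free(scratch);
    free(copy);
    return rc;              /* every path runs the same cleanup */
}
```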
## Vulnerabilities
### Vulnerabilities
The other problem with forgetting to call `free` is allowing an attacker to gain access to your memory space, which could cause sensitive data to be leaked: by exploiting other vulnerabilities, an attacker could gain access to memory that was supposed to be freed. Forgetting to free also enables denial-of-service attacks, since an attacker can specifically target the memory leak to overload the system.
Another vulnerability isn't forgetting to free, but forgetting that you did free. Use-after-free can be a big issue: if an attacker can fill the memory space that was previously freed, then when the program uses the pointer again, instead of erroring out, the vulnerable program will use the attacker's data. This could result in code execution, depending on how the memory is used.
# Knowing When to `free`
## Knowing When to `free`
When you as the developer call `calloc`, `malloc`, or almost anything else with an `alloc`, it's pretty clear that those need to be freed: you declared the pointer and created the memory. But there are other situations that are not as clear, such as calling functions that allocate memory for you.
@@ -38,11 +38,11 @@ These functions could either be built in functions like `strdup` or ones which y
This is a perfect situation for a `goto` to an anchor at the end of the method. Then there only needs to be a single `free`, preventing use-after-free and double free. It also requires only a single return, which prevents returning before freeing and reduces the risk of memory leaks.
# Knowing When NOT to `free`
## Knowing When NOT to `free`
Knowing when not to free is not as big of an issue as not freeing, but it can still cause problems. Double frees, freeing before allocating, and freeing without allocating can crash your program. That may not introduce additional errors, but it can make you vulnerable to denial of service.
# Conclusion
## Conclusion
Freeing is vitally important to keeping your programs safe. All allocations need to be freed, and it's best to free at the end of the method the pointer was allocated in. This helps prevent use-after-frees and forgetting to free. An anchor at the end of a method, reached with `goto`, is the best way to accomplish this.


@@ -4,7 +4,7 @@ date: 2022-08-13
draft: false
---
# Introduction
## Introduction
Series on summarizing themes in "Secure Coding in C and C++" by Robert C. Seacord, part 2. Find part 1 here: [Always null Terminate (Part 1)]({{<ref "secure-coding-in-c-summations-null-terminate.md">}}). We are currently going through this book in our work book club, and there are a lot of good themes threaded through it. These are my notes, thoughts, and summaries on some of what I've read and what our book club has discussed.
@@ -12,11 +12,11 @@ This is written for an audience that has a broad overview of security concepts.
The first theme to discuss is always `null` terminating `char *` or `char` array buffers (unless you have a *very* specific reason not to). This is very important to help prevent buffer overflows, reading arbitrary memory, and accessing 'inaccessible' memory. This is part 2, where we will discuss string concatenation and length. For a brief discussion on string copy see [part 1]({{<ref "secure-coding-in-c-summations-null-terminate.md">}}).
# Functions Needing null
## Functions Needing null
One of the important reasons to `null` terminate is that several very common functions require `null` termination, even some that you wouldn't necessarily think of. Without a `null` at the end of the buffer, things can quickly go wrong.
## String Cat
### String Cat
The next set of functions to look at are for concatenating strings. These buffers not only need to be `null` terminated, they also need to be properly allocated. If they are not, a concatenation could overwrite `null` terminators, and the resulting string could cause errors further along in the code. Memory allocation will be discussed further in another post. First I'm going to throw a table at you; it gives a summary of string concat functions and how they handle some of the issues. We will discuss further after the table.
@@ -29,7 +29,7 @@ The next set of functions to look at are concatenating strings. These not only n
Let's go over each function:
### strcat
#### strcat
```c
char *strcat(char *dest, const char *src)
@@ -46,7 +46,7 @@ Arbitrary memory reads can be a problem since it could mean revealing data meant
Be sure to set the last character to `null` after the `strcat` is completed.
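A small sketch of that advice (the sizes and strings are made up):

```c
#include <string.h>

void demo(void)
{
    char dest[32] = "Hello, ";   /* initialized, so null terminated */
    const char *src = "world";

    /* Caller must guarantee dest has room for src plus the null. */
    strcat(dest, src);

    /* Defensive: force termination of the full buffer afterward. */
    dest[sizeof(dest) - 1] = '\0';
}
```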
### strncat
#### strncat
```c
char *strncat(char *dest, const char *src, size_t src_len)
@@ -59,7 +59,7 @@ In addition if `src` is not `null` terminated and `src_len` is longer than the l
`strncat` helps the developer watch for these issues but doesn't actually solve them.
### strlcat
#### strlcat
```c
size_t strlcat(char *dst, const char *src, size_t size)
@@ -74,13 +74,13 @@ Point one is great so you don't need to worry as much about pre setting the memo
Point two is good so you can compare `size` to the return value to see if the source was truncated.
## Sensing a Theme
### Sensing a Theme
There are two themes for string concatenation: one is **`null` terminate all character buffers**; the second is proper memory allocation, which will be discussed in a future post.
Every one of these functions requires the source and destination to be `null` terminated. If they are not, or if there is a `null` in the middle, it will cause issues!
# Conclusion
## Conclusion
`null` termination is important so that we don't accidentally read from or write to arbitrary memory. This concludes the discussion on `null` termination; the next post will cover proper memory allocation.


@@ -4,7 +4,7 @@ date: 2021-09-01
draft: false
---
# Introduction
## Introduction
Welcome to the next series: summarizing themes in "Secure Coding in C and C++" by Robert C. Seacord. We are currently going through this book in our work book club, and there are a lot of good themes threaded through it. These are my notes, thoughts, and summaries on some of what I've read and what our book club has discussed.
@@ -12,11 +12,11 @@ This is written for an audience that has a broad overview of security concepts.
The first theme to discuss is always `null` terminating `char *` or `char` array buffers (unless you have a *very* specific reason not to). This is very important to help prevent buffer overflows, reading arbitrary memory, and accessing 'inaccessible' memory.
# Functions Needing null
## Functions Needing null
One of the important reasons to `null` terminate is that several very common functions require `null` termination, even some that you wouldn't necessarily think of. Without a `null` at the end of the buffer, things can quickly go wrong.
## String Copy
### String Copy
The first set of functions to look at are for copying strings. These buffers not only need to be `null` terminated, they also need to be properly allocated. Memory allocation will be discussed further in another post. First I'm going to throw a table at you; it gives a summary of string copy functions and how they handle some of the issues. We will discuss further after the table.
@@ -29,7 +29,7 @@ The first set of functions to look at are copying strings. These not only need t
Let's go over each function:
### strcpy
#### strcpy
```c
char *strcpy(char *dest, const char *src)
@@ -44,7 +44,7 @@ This function is super basic and needs a lot of careful programming. The destina
Arbitrary memory reads can be a problem since it could mean revealing data meant to be secret. Depending on where memory is allocated, sensitive data could be revealed to the user.
### strncpy
#### strncpy
```c
char *strncpy(char *dest, const char *src, size_t dest_len)
@@ -58,7 +58,7 @@ The only thing it does is *helps* with buffer overflows. However, if the `dest_l
So `strncpy` can still read arbitrary memory and can still buffer overflow (though overflows are more difficult).
### strlcpy
#### strlcpy
```c
size_t strlcpy(char *dst, const char *src, size_t size)
@@ -73,7 +73,7 @@ Point one is great so you don't need to worry as much about pre setting the memo
Point two is good so you can compare `size` to the return value to see if the source was truncated.
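A sketch of that truncation check (note that `strlcpy` is a BSD function; on Linux it typically comes from libbsd rather than the standard library):

```c
#include <string.h>   /* on Linux: <bsd/string.h>, linked with -lbsd */

void copy_name(const char *src)
{
    char buf[16];

    /* The return value is the length strlcpy *tried* to create, so
     * a value >= sizeof(buf) means the source was truncated. */
    if (strlcpy(buf, src, sizeof(buf)) >= sizeof(buf)) {
        /* handle truncation: reject the input, log it, etc. */
    }
}
```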
### strdup
#### strdup
```c
char *strdup(const char *s);
@@ -85,12 +85,12 @@ The only thing to note is that it reads until the `null` terminator.
One important thing to note: the returned value must be `free`'d.
## Sensing a Theme
### Sensing a Theme
See the theme yet ... **`null` terminate all character buffers**
Every one of these functions requires the source to be `null` terminated. If it is not, or if there is a `null` in the middle, it will cause issues!
# Conclusion
## Conclusion
`null` terminating is very important to prevent accessing or writing to memory locations that should not be touched. In this post we discussed copying strings; in the next post, we will continue this theme with concatenating strings.


@@ -4,7 +4,7 @@ date: 2019-08-23
draft: false
---
# Introduction
## Introduction
In order to allow flexibility in deployment location and to preserve user privacy, we have performed research into stateless classification of network traffic. Because traffic does not always follow the same path through a network, by not worrying about state we can deploy anywhere. We also use only one direction of traffic, since replies could follow a different path through the network. And by not requiring data within the packet, we can perform analysis on encrypted traffic as well.
@@ -12,7 +12,7 @@ Our research shows that it is possible to determine if traffic is malicious by u
This post serves as an introduction to my master's thesis of the same title. [Full paper for those interested.](/security/StatelessDetectionOfMaliciousTraffic.pdf)
# What Was Done
## What Was Done
The system we developed for this research was an intrusion detection system (IDS), and thus does not block any traffic. Most IDSs use specific signatures for traffic; these are inflexible and will only detect the specific attack. If the traffic is modified in any way, it will no longer be detected. Instead of signatures, our system looks at ongoing traffic patterns.
@@ -22,7 +22,7 @@ Our system differs since it uses patterns. Because of this, we cannot say for ce
We used three primary data points to determine if traffic was malicious: destination port, TTL, and packet frequency. To actually perform the classification, we used a software package called WEKA (an open-source machine learning toolkit) and focused on BayesNet classification.
# Conclusions
## Conclusions
While performing the research, we observed that port-only usage provided the least confidence. This isn't surprising, since it is only useful for network scans. Packet frequency proved to be a better data point for classification: benign traffic had a burst at the beginning, with fairly regular communication for the rest of a session, while malicious traffic had a large burst of traffic followed by nothing, or very little traffic. TTL proved to be one of the best signatures. This is because most benign traffic goes to a few locations, which are usually physically close. The TTL for malicious traffic is usually smaller, either due to more distant physical locations, as part of the attack, or because the attacker wants to gain further information about the victim network.