
ec2objects represents all AWS EC2 services as objects, hiding all those horrible API calls.


ec2 objects (pip3 install ec2objects)

everyone: I wish, for once, to just have a simple object-oriented experience with the API.

ec2objects:

Please visit the GitHub page for documentation with navigation that works.

Table of Contents

  • How to install
  • Configurations
  • Regions
  • Images
  • SSH Key Pairs

How to install

Here are your options.

Install from pypi repository

The most popular way is to install the latest package available on PyPI.

You can install ec2objects using pip3:

pip3 install -U ec2objects

If you like, you can uninstall it using:

pip3 uninstall ec2objects

Install from the cloned GitHub repo

There are a few ways to install this Python package from a clone of its GitHub repo. Check out a copy and try the following...

Build a .tar.gz install package

From the root of the repo, build the package and then check it:

python3 setup.py sdist
twine check dist/*

Check the newly created dist directory for the .tar.gz file. This is your package, and you can install it using...

pip3 install ./dist/ec2objects-0.0.17.tar.gz

You can still uninstall using the same command:

pip3 uninstall ec2objects

Install using the setup.py file

!WARNING! This install method does not track which files are installed or where they are placed, so you need to keep a record of where python3 puts them.

This is how... from the GitHub repo root directory:

sudo python3 setup.py install --record files.txt

You can uninstall by playing back that files.txt file:

sudo xargs rm -rf < files.txt

Local interactive install

Using this method you can modify this package's code and have the changes immediately available. Perfect if you want to tinker with the library, poke around, and contribute to the project.

From the cloned repository root directory.

pip3 install -e ./

You can uninstall using the usual command,

pip3 uninstall ec2objects

⬆ back to top

Configurations

boto3

ec2objects uses boto3 to interact with the Amazon API. boto3 was chosen because it returns nice JSON responses instead of the XML mess you get from interacting with the API directly.

You can find further token requirements and details in the boto3 documentation.

Here are the quick-start requirements for ec2objects.

Token: Required

Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to your Amazon AWS access credentials.

export AWS_ACCESS_KEY_ID='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

Default Region: Required

Please set the AWS_DEFAULT_REGION environment variable to a region you know you have enabled for your credentials.

For example:

export AWS_DEFAULT_REGION='us-east-1'

For relevant commands you can also specify a completely different region if you need to act outside your default.
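
For example, a minimal sketch using boto3 directly (not ec2objects) to confirm the exported credentials and default region are picked up; the printed region name is just whatever you exported above:

# Sketch: verify boto3 sees the AWS_* environment variables set above.
import os
import boto3

print(os.environ.get("AWS_DEFAULT_REGION"))            # e.g. 'us-east-1'

ec2 = boto3.client("ec2")                              # reads the AWS_* environment variables
response = ec2.describe_regions()                      # fails fast if the credentials are wrong
print([r["RegionName"] for r in response["Regions"]])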

⬆ back to top

Regions

An AWS Region is a collection of AWS resources in a geographic area.

Each AWS Region is isolated and independent of the other Regions.

Regions provide fault tolerance, stability, and resilience, and can also reduce latency. They enable you to create redundant resources that remain available and unaffected by a Regional outage.

Import the ec2objects Region and RegionManager to interact with Regions.

from ec2objects import Region, RegionManager

Region Manager

Create a region manager.

region_manager = RegionManager()

Retrieve All Regions

Retrieve a list of region objects.

list_of_all_region_objects = region_manager.retrieve_all_regions()

Retrieve Regions Enabled For Your Account

Retrieve a list of region objects enabled for your account.

list_of_enabled_region_objects = region_manager.retrieve_regions_enabled_for_my_account()
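
For example, a minimal sketch (assuming the methods above return Region objects as described in the next section) that prints each region enabled for the account:

# Sketch: list the regions enabled for this account.
from ec2objects import RegionManager

region_manager = RegionManager()
for region in region_manager.retrieve_regions_enabled_for_my_account():
    # attribute names follow the RegionAttributes dataclass shown below
    print(region.attributes.RegionName, region.attributes.OptInStatus)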

Region Object

Region objects contain an attributes data class with the standard EC2 region attributes.

Region objects also contain a list of AvailabilityZones data classes for that region.

from dataclasses import dataclass, field

class Region:
    def __init__(self):
        self.attributes = RegionAttributes()
        self.availabilityzones = []
        ...

@dataclass
class RegionAttributes:
    RegionName: str = None
    Endpoint: str = None
    OptInStatus: str = None

@dataclass
class AvailabilityZones:
    State: str = None
    OptInStatus: str = None
    Messages: list = field(default_factory=list)
    RegionName: str = None
    ZoneName: str = None
    ZoneId: str = None
    GroupName: str = None
    NetworkBorderGroup: str = None
    ZoneType: str = None
    ParentZoneName: str = None
    ParentZoneId: str = None
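
For example, a short sketch walking the availability zones attached to each region object, assuming the manager populates availabilityzones as described above:

# Sketch: print the availability zones of every region.
regions = region_manager.retrieve_all_regions()
for region in regions:
    for zone in region.availabilityzones:
        print(region.attributes.RegionName, zone.ZoneName, zone.State)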

⬆ back to top

Images

An Amazon Machine Image (AMI) provides the information required to launch an instance (see the AMI User Guide).

Further details are available in the AMI boto3 API guide.

An AMI includes the following:

  • One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-store-backed AMIs, a template for the root volume of the instance (for example, an operating system, an application server, and applications).

  • Launch permissions that control which AWS accounts can use the AMI to launch instances.

  • A block device mapping that specifies the volumes to attach to the instance when it's launched.

Import the ec2objects Image and ImageManager to interact with Images.

from ec2objects import Image, ImageManager

Image Manager

Create an image manager.

image_manager = ImageManager()

Retrieve Image By ImageId

Retrieve an image object by ImageId.

image_object = image_manager.retrieve_image("ami-fd534b97")
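
The returned object exposes the standard AMI fields through its attributes data class (see the Image Object section below); a minimal sketch:

# Sketch: attribute names follow the ImageAttributes data class shown below.
print(image_object.attributes.ImageId)
print(image_object.attributes.Name, image_object.attributes.CreationDate)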

Retrieve Ubuntu Images

Okay, so the AWS documentation is a bit difficult. Going to break it down and try to extract some common sense... thanks AWS.

The following methods retrieve Ubuntu images that have...

Architecture: 'x86_64' or 'arm64', ImageType: 'machine'

hvm-ssd: Important! What does this mean? The instance created from this image (an hvm-ssd) will have small root storage, /dev/sda1, to hold the OS. /dev/sda1 is ephemeral storage: the volatile temporary storage attached to your instance which is only present during the running lifetime of the instance. Data will be available on restart but will be destroyed on instance termination.

For longer-term data storage you need to attach an existing Elastic Block Store (EBS) volume, or create a new EBS volume and attach that.

...and HVM?

HVM (Hardware Virtual Machine) is the type of instance that mimics a bare-metal server setup, which provides better hardware isolation. With this instance type, the OS can run directly on top of the virtual machine without additional configuration, making it look like it is running on a real physical server.

Retrieve All x86_64 Ubuntu hvm-ssd machine images

Retrieve a list of image objects.

Argument 'name' is optional and can be excluded entirely. You can request individual Ubuntu releases using the first string of the release code names found at https://wiki.ubuntu.com/Releases e.g. name="focal", name="bionic", name="trusty"

list_of_image_objects = image_manager.retrieve_all_ubuntu_x86_64_machine_images_hvm_ssd(name="")

Retrieve All arm64 Ubuntu hvm-ssd machine images

Retrieve a list of image objects.

Argument 'name' is optional and can be excluded entirely. You can request individual Ubuntu releases using the first string of the release code names found at https://wiki.ubuntu.com/Releases e.g. name="focal", name="bionic", name="trusty"

list_of_image_objects = image_manager.retrieve_all_ubuntu_arm64_machine_images_hvm_ssd(name="")
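
A common follow-up is to sort the returned list by creation date to pick the newest release image; a hedged sketch assuming the methods above:

# Sketch: pick the most recent focal (20.04) x86_64 image from the returned list.
images = image_manager.retrieve_all_ubuntu_x86_64_machine_images_hvm_ssd(name="focal")
newest = max(images, key=lambda img: img.attributes.CreationDate)  # ISO date strings sort chronologically
print(newest.attributes.ImageId, newest.attributes.Name)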

⬆ back to top

Image Object

Image objects contain an attributes data class with the standard EC2 image attributes; descriptions are available in the AWS documentation.

Each object also contains a list of ebsblockdevices, and a list of virtualblockdevices - empty if none exists.

ebsblockdevices holds a list of BlockDeviceMappingEBS if they exist.

virtualblockdevices holds a list of BlockDeviceMappingVirtual if they exist.

from dataclasses import dataclass

class Image:
    def __init__(self):
        self.attributes = ImageAttributes()
        self.ebsblockdevices = []
        self.virtualblockdevices = []

@dataclass
class ImageAttributes:
    Architecture: str = None
    CreationDate: str = None
    ImageId: str = None
    ImageLocation: str = None
    ImageType: str = None
    Public: bool = None
    KernelId: str = None
    OwnerId: str = None
    Platform: str = None
    PlatformDetails: str = None
    UsageOperation: str = None
    RamdiskId: str = None
    State: str = None
    Description: str = None
    EnaSupport: bool = None
    Hypervisor: str = None
    ImageOwnerAlias: str = None
    Name: str = None
    RootDeviceName: str = None
    RootDeviceType: str = None
    SriovNetSupport: str = None
    VirtualizationType: str = None
    BootMode: str = None
    DepricationTime: str = None

@dataclass
class BlockDeviceMappingEBS:
    DeviceName: str = None
    SnapshotId: str = None
    VolumeSize: str = None
    DeleteOnTermination: bool = None
    VolumeType: str = None
    Encrypted: bool = None

@dataclass
class BlockDeviceMappingVirtual:
    DeviceName: str = None
    VirtualName: str = None
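
For example, a small sketch inspecting the EBS block devices attached to an image object retrieved earlier (field names as above):

# Sketch: examine the EBS block devices of an image object.
for device in image_object.ebsblockdevices:
    print(device.DeviceName, device.VolumeSize, device.VolumeType, device.DeleteOnTermination)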

⬆ back to top

SSH Key Pairs

A key pair, consisting of a public key and a private key, is a set of security credentials that you use to prove your identity when connecting to an EC2 instance. Import the ec2objects KeyPair and KeyPairManager to interact with SSH Key Pairs.

from ec2objects import KeyPair, KeyPairManager

KeyPair Manager

Create a keypair manager.

keypair_manager = KeyPairManager()

Retrieve Keypairs

Retrieve all of your keypairs.

list_of_keypair_objects=keypair_manager.retrieve_keypairs()

Retrieve Keypair By Name

Retrieve all of your keypairs by keypair name. AWS keypair names are unique.

list_of_keypair_objects=keypair_manager.retrieve_keypair_by_name(str:name)

Retrieve Keypair By ID

Retrieve all of your keypairs by keypair ID.

list_of_keypair_objects=keypair_manager.retrieve_keypair_by_id(str:keypairid)

Retrieve Keypair By Fingerprint

Retrieve all of your keypairs by key fingerprint.

list_of_keypair_objects=keypair_manager.retrieve_keypair_by_fingerprint(str:fingerprint)

Retrieve Keypairs By Tag Keyname

Retrieve all of your keypairs by the name of a tag key, not its actual tag value.

list_of_keypair_objects=keypair_manager.retrieve_keypair_by_tag_keyname(str:tagkeyname)

Retrieve Keypairs By Tag

Retrieve all of your keypairs by supplying the name of a tag, and the tag value.

list_of_keypair_objects=keypair_manager.retrieve_keypair_by_tag(str:key,str:value)
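
For example, a minimal sketch listing every keypair on the account and its fingerprint (attribute names follow the KeyPair Object section below):

# Sketch: print the name and fingerprint of every keypair.
from ec2objects import KeyPairManager

keypair_manager = KeyPairManager()
for keypair in keypair_manager.retrieve_keypairs():
    print(keypair.attributes.KeyName, keypair.attributes.KeyFingerprint)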

Upload Keypair

AWS accepts the following SSH key types: ssh-rsa and ssh-ed25519. The exception KeyPairTypeNotSupportedByAWS will be raised if an attempt is made to upload an unsupported key type. This method returns ONE keypair object with the details of the new key uploaded to AWS.

uploadedkey = keypair_manager.upload_keypair(str:keyname, str:ssh_public_key, dict:tags)
uploadedkey = keypair_manager.upload_keypair(str:keyname, str:ssh_public_key)

where you provide tags for your keypair in the form of a dict, e.g.

tags = {}
tags["tag1"] = "value1"
tags["apple"] = "banana"

⬆ back to top

KeyPair Object

A keypair object contains the attributes, and tags for an AWS keypair.

from dataclasses import dataclass

class KeyPair:
    def __init__(self):
        self.attributes = KeyPairAttributes()
        self.tags = []
        ...

@dataclass
class KeyPairAttributes:
    KeyPairId: str = None
    KeyFingerprint: str = None
    KeyName: str = None
    KeyType: str = None

@dataclass
class Tags:
    key: str = None
    Value: str = None

Delete Keypair

You can delete a keypair from AWS by calling the delete method on your keypair_object.

keypair_object.delete()
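
For example, looking a keypair up by name and deleting it (retrieve_keypair_by_name returns a list, as above):

# Sketch: find a keypair by name and delete it from AWS.
matches = keypair_manager.retrieve_keypair_by_name("my-demo-key")
if matches:
    matches[0].delete()  # removes the keypair from AWS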

⬆ back to top

