# Extending EBS volume size

If you maintain a long-running EC2 instance, you may have encountered a situation where the initial EBS storage volume becomes insufficient. Perhaps the log volume grew more rapidly than expected, or you manually put many resources on the instance.

Either way, we can remove the unnecessary files or recreate the instance if possible. If neither is an option, the only way out is to extend the EBS volume.

This article explains how to increase the EBS volume size at runtime, with no downtime for the EC2 instance.

## Modify EBS volume

The first step is to modify the EBS volume attached to the target instance. After selecting the volume in the console, click Modify Volume in the actions pane.

We can set an arbitrary volume size from there. Unfortunately, it takes a while for the optimization to complete, so let's wait a few minutes.
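If you prefer the command line over the console, the same modification can be done with the AWS CLI. This is just a sketch; the volume ID below is a placeholder you would replace with your own.

```shell
# Placeholder volume ID; replace with your own.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16

# Check the modification progress (modifying -> optimizing -> completed).
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```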

Even after the optimization completes, we are not done yet. We still need to reconfigure the partition and file system on the volume.

## Extending Linux File System

We must use a file-system-specific command to extend the file system to the larger size. Although the exact command depends on the file system you use, we assume ext4 here.

First, let's check the root file system on the instance.

```
$ df -hT
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/xvda1     ext4  8.0G  1.9G  6.2G  24% /
```

1.9G of capacity is already occupied on /dev/xvda1. Now let's say we have already increased the EBS volume size for this root device from 8G to 16G. The system does not automatically recognize the new volume size; we need to extend the partition manually to let the system use it. The lsblk command shows the partition information:

```
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0   8G  0 part /
```


The root volume /dev/xvda has 16G of capacity, and its single partition /dev/xvda1 occupies 8G of it. We can grow the partition by running the following command.

```
$ sudo growpart /dev/xvda 1
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /
```


We also need to extend the file system on that partition. For ext4, resize2fs does the job.

```
$ sudo resize2fs /dev/xvda1
$ df -h
Filesystem       Size  Used Avail Use% Mounted on
/dev/xvda1        16G  1.9G  14G  12% /
```


Now everything is done, without any downtime!

# Elasticsearch Aggregation with Python DSL

While writing some trivial code to manipulate an Elasticsearch cluster, a question jumped into my head.

“Does aggregation take into account all matched documents even if we specify the from and size parameters?”

For instance, suppose we have 100 documents in an index and run a query matching 50 of them, with a size limit of 10 for pagination. How does the aggregation work? Is the aggregation computed over the whole matched set, or just over the documents on the current page?

## Prerequisites

Let’s say we have the following documents in the index.

```json
[
  {
    "title": "Title1",
    "author": "SomeAuthor",
    "contents": "Content1",
    "published_at": "2021-01-01"
  },
  {
    "title": "Title2",
    "author": "SomeAuthor",
    "contents": "Content2",
    "published_at": "2021-01-02"
  },
  {
    "title": "Title3",
    "author": "AnotherAuthor",
    "contents": "Content3",
    "published_at": "2021-01-03"
  },
  {
    "title": "Title4",
    "author": "AnotherAuthor",
    "contents": "Content4",
    "published_at": "2021-01-04"
  },
  {
    "title": "Title5",
    "author": "AnotherAuthor",
    "contents": "Content5",
    "published_at": "2021-01-05"
  }
]
```


A query I’ve written using elasticsearch-dsl looks as follows:

```python
from elasticsearch_dsl import Search, Q, A

search_query = Search(using=client, index=index_name)
search_query = search_query.filter(
    "range", published_at={"gte": "2021-01-01", "lte": "2021-01-03"}
)
```


This search should match three documents in the index: Title1, Title2, and Title3. Now let's add an aggregation to count the documents by author name.

## Aggregation by Author

The following code generates a query to count the documents by author.

```python
author_count = A("terms", field="author")
search_query.aggs.bucket("author_count", author_count)
```


The response will look like this:

```json
{
  "aggregations": {
    "author_count": {
      "buckets": [
        {"key": "SomeAuthor", "doc_count": 2},
        {"key": "AnotherAuthor", "doc_count": 1}
      ]
    }
  }
}
```


The document count reflects the query context: the aggregation is computed over the set of documents matching the given query. What will happen if we add the size parameter?

## From and Size

elasticsearch-dsl allows us to set the pagination parameters. The way to do so is even more Pythonic: it uses Python's list slicing syntax.

```python
search_query = search_query[0:2]
```


This adds the following parameters to the request, omitting the last document we saw previously (Title3).

```json
{
  "from": 0,
  "size": 2
}
```


How about the aggregation value? As we might expect, it remains unchanged. The from and size parameters are designed for pagination, and aggregations are not affected by them: users typically want metrics or statistics over the whole matched population, not just the documents on the current page. Therefore we can use the aggregation values without worrying about which page we are on.
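The behavior can be illustrated with a pure-Python analogy (no Elasticsearch cluster involved): the aggregation is computed over everything the filter matched, while the slice only trims what gets returned.

```python
from collections import Counter

# The five documents from the example index above.
docs = [
    {"title": "Title1", "author": "SomeAuthor", "published_at": "2021-01-01"},
    {"title": "Title2", "author": "SomeAuthor", "published_at": "2021-01-02"},
    {"title": "Title3", "author": "AnotherAuthor", "published_at": "2021-01-03"},
    {"title": "Title4", "author": "AnotherAuthor", "published_at": "2021-01-04"},
    {"title": "Title5", "author": "AnotherAuthor", "published_at": "2021-01-05"},
]

# 1. The filter context: the range query on published_at.
matched = [d for d in docs if "2021-01-01" <= d["published_at"] <= "2021-01-03"]

# 2. The terms aggregation runs over ALL matched documents...
author_count = Counter(d["author"] for d in matched)

# 3. ...while from/size only trims the hits returned to the client.
page = matched[0:2]  # from=0, size=2

print(author_count)  # Counter({'SomeAuthor': 2, 'AnotherAuthor': 1})
print(len(page))     # 2
```

This is only a mental model, of course; in the real cluster the aggregation runs server-side over the matched set before pagination is applied to the hits.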

# How to deal with 'Failed to ping backend API' in Docker

After I upgraded Docker to the latest version, I constantly faced the following error: "Failed to ping backend API".

Following the instructions in the dialog, I clicked some buttons, but it was all in vain; the dialog did not respond at all. There was no way to fix the problem and make it disappear other than restarting the machine. It was stressful to see the error show up every time I launched the laptop.

Although the issue has already been discussed here, it is not resolved yet. It seems to be a bug in the Docker engine installed on macOS.

How can we deal with the situation?

## Restarting Docker

The easiest and most effective way I have found is to forcibly restart the Docker process. Running the following command dismisses the dialog and relaunches the process without any trouble.

```
$ killall Docker && cd /Applications;open -a Docker;cd ~
```


Every time I see the error message, I quickly run the command and get back to my work :)
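If you get tired of retyping the one-liner, you could wrap it in a small shell function (the name restart_docker is my own choice) in your shell config:

```shell
# Hypothetical convenience function; add to ~/.zshrc or ~/.bashrc.
# Kills the Docker Desktop process and relaunches the app.
restart_docker() {
  killall Docker
  open -a Docker
}
```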

For your reference, my Docker engine version is v20.10.7, on macOS 10.15.7.

# Note for string compatible type conversion in C++

Type conversion may be among the most-googled topics in daily programming, regardless of language. That is certainly the case when writing C++ code. For example, I often forget how to convert std::string to char * and vice versa. llvm::StringRef definitely adds complexity to this conversion graph between string-compatible types in C++.

This article is a brief note on how to convert back and forth among these three data types, so that we can refer to it later as necessary.

## std::string -> char *

It’s pretty simple: std::string has a method, c_str(), that returns a pointer to the underlying null-terminated character array.

```cpp
std::string str = "Hello, World";
const char *c = str.c_str();
```


## char * -> std::string

std::string has a constructor that takes a const char *, so you can create a std::string directly from a char *.

```cpp
const char *c = "Hello, World";
std::string str(c);
```


## llvm::StringRef -> std::string

llvm::StringRef has a method, str(), that returns a std::string copy of the referenced data.

```cpp
llvm::StringRef stringRef("Hello, World");
std::string str = stringRef.str();
```


## llvm::StringRef -> char *

llvm::StringRef has a method, data(), that returns the underlying data pointer. Note that the buffer behind a StringRef is not guaranteed to be null-terminated, so treat the result as a raw pointer paired with size(), not as a C string.

```cpp
llvm::StringRef stringRef("Hello, World");
const char *c = stringRef.data();
```



## std::string, char * -> llvm::StringRef

We can construct llvm::StringRef from both std::string and char * via its constructors.

```cpp
std::string str = "Hello, String";
const char *c = "Hello, Char";
llvm::StringRef stringRef1(str);
llvm::StringRef stringRef2(c);
```


# True Cause behind Additional Verification in ACM

AWS Certificate Manager (ACM) is a service that manages the complexity around SSL/TLS certificates, such as creating, storing, and renewing them. ACM handles almost all of the operational burden on our behalf so we can concentrate on essential application development. That is a massive benefit if you want to provide a secure web service using SSL/TLS. (Of course, all websites should use SSL/TLS by default.)

The other day, I encountered a situation where ACM showed an error message like:

> Request failed. The status of this certificate request is “Failed”. Additional verification is required to request certificates for one or more domain names in this request.

The certificate request failed. What’s that? I usually pass the verification process without any trouble. So what do I need to do to handle this additional verification?

The forum gave me a clear answer.

> Usually, this error appears when an ACM certificate request contains a domain listed under the Alexa Top 1000 domains. This process is in place to prevent abuse.

Indeed, the target domain I requested the certificate for was listed in the Alexa Top 1000 at the time. :) So you will rarely run into this situation unless you operate an extremely popular domain.

The only way to resolve the issue is to file a support ticket asking AWS to put the domain on its whitelist. That seems to serve as the additional verification in this case. The AWS support team responds promptly, and the certificate will be issued once the ticket is closed.