This repository was archived by the owner on Aug 7, 2025. It is now read-only.

Commit a598424

dfangl authored and alexrashed committed
fix the last of the commands
1 parent 14b386e commit a598424


19 files changed: +105 additions, -106 deletions


content/en/docs/Integrations/aws-cli/index.md

Lines changed: 16 additions & 17 deletions
@@ -13,38 +13,37 @@ All CLI commands that access [services that are implemented in LocalStack]({{< r
 There are two ways to use the CLI:

 * Use our `awslocal` drop-in replacement:
-```
-awslocal kinesis list-streams
-```
+{{< command >}}
+$ awslocal kinesis list-streams
+{{< / command >}}
 * Configure AWS test environment variables and add the `--endpoint-url=<localstack-url>` flag to your `aws` CLI invocations.
 For example:
-```
-export AWS_ACCESS_KEY_ID="test"
-export AWS_SECRET_ACCESS_KEY="test"
-export AWS_DEFAULT_REGION="us-east-1"
+{{< command >}}
+$ export AWS_ACCESS_KEY_ID="test"
+$ export AWS_SECRET_ACCESS_KEY="test"
+$ export AWS_DEFAULT_REGION="us-east-1"

-aws --endpoint-url=http://localhost:4566 kinesis list-streams
-```
+$ aws --endpoint-url=http://localhost:4566 kinesis list-streams
+{{< / command >}}

 ## AWS CLI

 Use the below command to install `aws`, if not installed already.

-```
-pip install awscli
-```
+{{< command >}}
+$ pip install awscli
+{{< / command >}}

 ### Setting up local region and credentials to run LocalStack

 aws requires the region and the credentials to be set in order to run the aws commands.
 Create the default configuration and the credentials.
 Below key will ask for the Access key id, secret Access Key, region & output format.
+Config & credential file will be created under ~/.aws folder

-```
-aws configure --profile default
-
-# Config & credential file will be created under ~/.aws folder
-```
+{{< command >}}
+$ aws configure --profile default
+{{< / command >}}

 {{< alert >}}
 **Note** Please use `test` as value for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to make pre-signed URLs for S3 buckets work.

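To sanity-check the two invocation styles from the `aws-cli` page above, here is a minimal sketch, assuming LocalStack is already running on the default edge port `4566`; the bucket name `demo-bucket` is a hypothetical placeholder:

```bash
# Drop-in wrapper: awslocal injects the LocalStack endpoint and dummy credentials for you.
awslocal s3 mb s3://demo-bucket
awslocal s3 ls

# Equivalent plain `aws` invocation with explicit test credentials and endpoint URL.
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"
aws --endpoint-url=http://localhost:4566 s3 ls
```

Both calls should list the bucket created in the first step.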
content/en/docs/Integrations/pulumi/index.md

Lines changed: 4 additions & 4 deletions
@@ -22,10 +22,10 @@ This guide follows the instructions from Pulumi's [Get Started with Pulumi and A

 First, run the following commands and follow the instructions in the CLI to create a new project.

-```
-mkdir quickstart && cd quickstart
-pulumi new aws-typescript
-```
+{{< command >}}
+$ mkdir quickstart && cd quickstart
+$ pulumi new aws-typescript
+{{< / command >}}

 We use the default configuration values:

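For the `quickstart` project above to deploy against LocalStack instead of real AWS, the Pulumi AWS provider needs test credentials and LocalStack endpoints. A rough sketch using stack configuration; the `aws:endpoints` path assumes the provider's structured `endpoints` setting and the default edge port `4566`:

```bash
# Run inside the quickstart project created by `pulumi new aws-typescript`.
pulumi config set aws:region us-east-1
pulumi config set aws:accessKey test
pulumi config set aws:secretKey test --secret
pulumi config set aws:skipCredentialsValidation true
pulumi config set aws:skipRequestingAccountId true
# Point the S3 endpoint at LocalStack (repeat for other services the stack uses).
pulumi config set --path 'aws:endpoints[0].s3' http://localhost:4566
pulumi up
```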
content/en/docs/Integrations/spring-cloud-function/index.md

Lines changed: 6 additions & 6 deletions
@@ -79,16 +79,16 @@ install the Gradle build tool on your machine.

 Then run the following command to initialize a new Gradle project

-```shell
-gradle init
-```
+{{< command >}}
+$ gradle init
+{{< / command >}}

 After initialization, you will find the Gradle wrapper script `gradlew`.
 From now on, we will use the wrapper instead of the globally installed Gradle binary:

-```
-./gradlew <command>
-```
+{{< command >}}
+$ ./gradlew <command>
+{{< / command >}}

 ### Project Settings

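Once the wrapper is in place, a typical next step is to build the project and push the resulting artifact to the local Lambda API. A sketch, assuming `awslocal` is installed; the jar path, function name, and role ARN are placeholders, and the handler is Spring Cloud Function's generic AWS adapter class:

```bash
# Build the project with the wrapper.
./gradlew build

# Deploy the built jar to LocalStack's Lambda service (paths and names are illustrative).
awslocal lambda create-function \
  --function-name spring-cloud-function-demo \
  --runtime java11 \
  --handler org.springframework.cloud.function.adapter.aws.FunctionInvoker \
  --zip-file fileb://build/libs/demo-0.0.1-SNAPSHOT-aws.jar \
  --role arn:aws:iam::000000000000:role/lambda-exec-role
```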
content/en/docs/Local AWS Services/elastic-container-registry/index.md

Lines changed: 5 additions & 5 deletions
@@ -8,8 +8,8 @@ description: >

 A basic version of Elastic Container Registry (ECR) is available to store application images. ECR is often used in combination with other APIs that deploy containerized apps, like ECS or EKS.

-```
-$ awslocal ecr create-repository --repository-name repo1
+{{< command >}}
+$ awslocal ecr create-repository --repository-name repo1
 {
   "repository": {
     "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/repo1",
@@ -18,10 +18,10 @@ $ awslocal ecr create-repository --repository-name repo1
     "repositoryUri": "localhost:4510/repo1"
   }
 }
-```
+{{< / command >}}

 You can then build and tag a new Docker image, and push it to the repository URL (`localhost:4510/repo1` in the example above):
-```
+{{< command >}}
 $ cat Dockerfile
 FROM nginx
 ENV foo=bar
@@ -36,4 +36,4 @@ fe08d5d042ab: Pushed
 f2cb0ecef392: Pushed
 latest: digest: sha256:4dd893a43df24c8f779a5ab343b7ef172fb147c69ed5e1278d95b97fe0f584a5 size: 948
 ...
-```
+{{< / command >}}

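The build/tag/push flow referenced in the snippet above can be reproduced with standard Docker commands. A minimal sketch, assuming the `repositoryUri` returned by `create-repository` (`localhost:4510/repo1`) and the sample Dockerfile in the current directory:

```bash
# Tag the image directly with the local registry URI returned by ECR.
docker build -t localhost:4510/repo1:latest .

# Push to the repository served by LocalStack on port 4510.
docker push localhost:4510/repo1:latest
```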
content/en/docs/Local AWS Services/elastic-kubernetes-service/index.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ Please note that EKS requires an existing local Kubernetes installation. In rece
 ![Kubernetes in Docker](kubernetes.png)

 The example below illustrates how to create an EKS cluster configuration (assuming you have [`awslocal`](https://github.com/localstack/awscli-local) installed):
-```
+{{< command >}}
 $ awslocal eks create-cluster --name cluster1 --role-arn r1 --resources-vpc-config '{}'
 {
   "cluster": {
@@ -30,5 +30,5 @@ $ awslocal eks list-clusters
     "cluster1"
   ]
 }
-```
+{{< / command >}}
 Simply configure your Kubernetes client (e.g., `kubectl` or other SDK) to point to the `endpoint` specified in the `create-cluster` output above. Depending on whether you're calling the Kubernetes API from the local machine or from within a Lambda, you may have to use different endpoint URLs (`https://localhost:6443` vs `https://172.17.0.1:6443`).

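Pointing `kubectl` at the new cluster takes a few `kubectl config` calls. A sketch, assuming the `https://localhost:6443` endpoint from the output above; the cluster and context names are arbitrary placeholders, and depending on your local Kubernetes setup you may also need to configure credentials:

```bash
# Register the LocalStack-managed cluster and switch the current context to it.
kubectl config set-cluster localstack-eks --server=https://localhost:6443 --insecure-skip-tls-verify=true
kubectl config set-context localstack-eks --cluster=localstack-eks
kubectl config use-context localstack-eks
kubectl get nodes
```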
content/en/docs/Local AWS Services/elastic-mapreduce/index.md

Lines changed: 2 additions & 2 deletions
@@ -9,12 +9,12 @@ description: >
 LocalStack Pro allows running data analytics workloads locally via the [EMR](https://aws.amazon.com/emr) API. EMR utilizes various tools in the [Hadoop](https://hadoop.apache.org/) and [Spark](https://spark.apache.org) ecosystem, and your EMR instance is automatically configured to connect seamlessly to the LocalStack S3 API.

 To create a virtual EMR cluster locally from the command line (assuming you have [`awslocal`](https://github.com/localstack/awscli-local) installed):
-```
+{{< command >}}
 $ awslocal emr create-cluster --release-label emr-5.9.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.large
 {
   "ClusterId": "j-A2KF3EKLAOWRI"
 }
-```
+{{< / command >}}

 The command above will spin up one or more Docker containers on your local machine that can be used to run analytics workloads using Spark, Hadoop, Pig, and other tools.

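The returned `ClusterId` can then be fed into the regular EMR CLI operations. A sketch, assuming the cluster ID from the sample output above and that the local EMR API accepts the same calls; the step name and script location are placeholders:

```bash
# Inspect the cluster and submit a Spark step to it.
awslocal emr describe-cluster --cluster-id j-A2KF3EKLAOWRI
awslocal emr add-steps --cluster-id j-A2KF3EKLAOWRI \
  --steps Type=Spark,Name=SampleJob,ActionOnFailure=CONTINUE,Args=[s3://my-bucket/job.py]
```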
content/en/docs/Local AWS Services/elasticache/index.md

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@ description: >
 A basic version of [ElastiCache](https://aws.amazon.com/elasticache/) is provided. By default, the API is started on http://localhost:4598 and supports running a local Redis instance (Memcached support coming soon).

 After starting LocalStack Pro, you can test the following commands:
-```
+{{< command >}}
 $ awslocal elasticache create-cache-cluster --cache-cluster-id i1
 {
   "CacheCluster": {
@@ -20,14 +20,14 @@ $ awslocal elasticache create-cache-cluster --cache-cluster-id i1
     }
   }
 }
-```
+{{< / command >}}

 Then use the returned port number (`4530`) to connect to the Redis instance:
-```
+{{< command >}}
 $ redis-cli -p 4530 ping
 PONG
 $ redis-cli -p 4530 set foo bar
 OK
 $ redis-cli -p 4530 get foo
 "bar"
-```
+{{< / command >}}

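Instead of hard-coding port `4530`, the node endpoint can be looked up via the API. A sketch, assuming the cluster `i1` created above:

```bash
# The node address and port are listed under CacheNodes in the output.
awslocal elasticache describe-cache-clusters --cache-cluster-id i1 --show-cache-node-info

# Then connect with the reported port, for example:
redis-cli -p 4530 ping
```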
content/en/docs/Local AWS Services/glue/index.md

Lines changed: 14 additions & 14 deletions
@@ -15,7 +15,7 @@ In order to run Glue jobs, some additional dependencies have to be fetched from
 ## Creating Databases and Table Metadata

 The commands below illustrate the creation of some very basic entries (databases, tables) in the Glue data catalog:
-```
+{{< command >}}
 $ awslocal glue create-database --database-input '{"Name":"db1"}'
 $ awslocal glue create-table --database db1 --table-input '{"Name":"table1"}'
 $ awslocal glue get-tables --database db1
@@ -27,30 +27,30 @@ $ awslocal glue get-tables --database db1
     }
   ]
 }
-```
+{{< / command >}}

 ## Running Scripts with Scala and PySpark

 Assuming we would like to deploy a simple PySpark script `job.py` in the local folder, we can first copy the script to an S3 bucket:
-```
+{{< command >}}
 $ awslocal s3 mb s3://glue-test
 $ awslocal s3 cp job.py s3://glue-test/job.py
-```
+{{< / command >}}

 Next, we can create a job definition:
-```
+{{< command >}}
 $ awslocal glue create-job --name job1 --role r1 \
     --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
-```
+{{< / command >}}
 ... and finally start the job:
-```
+{{< command >}}
 $ awslocal glue start-job-run --job-name job1
 {
   "JobRunId": "733b76d0"
 }
-```
+{{< / command >}}
 The returned `JobRunId` can be used to query the status of the job execution, until it becomes `SUCCEEDED`:
-```
+{{< command >}}
 $ awslocal glue get-job-run --job-name job1 --run-id 733b76d0
 {
   "JobRun": {
@@ -59,7 +59,7 @@ $ awslocal glue get-job-run --job-name job1 --run-id 733b76d0
     "JobRunState": "SUCCEEDED"
   }
 }
-```
+{{< / command >}}

 For a more detailed example illustrating how to run a local Glue PySpark job, please refer to this [sample repository](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs).

@@ -75,12 +75,12 @@ CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://tes
 ```

 Then this command will import these DB/table definitions into the Glue data catalog:
-```
+{{< command >}}
 $ awslocal glue import-catalog-to-glue
-```
+{{< / command >}}

 ... and finally they will be available in Glue:
-```
+{{< command >}}
 $ awslocal glue get-databases
 {
   "DatabaseList": [
@@ -112,7 +112,7 @@ $ awslocal glue get-tables --database-name db2
     }
   ]
 }
-```
+{{< / command >}}

 ## Further Reading

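A trivial `job.py` is enough to exercise the `pythonshell` job flow described above. A sketch; the script body is purely a placeholder, and `--query` is the standard AWS CLI option for extracting just the run state:

```bash
# Create a minimal placeholder script and upload it before create-job / start-job-run.
cat > job.py <<'EOF'
print("hello from a local Glue job")
EOF
awslocal s3 cp job.py s3://glue-test/job.py

# Poll only the job run state instead of the full JSON document.
awslocal glue get-job-run --job-name job1 --run-id 733b76d0 --query 'JobRun.JobRunState'
```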
content/en/docs/Local AWS Services/iam/index.md

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ The environment configuration `ENFORCE_IAM=1` is required to enable this feature
 {{< /alert >}}

 Below is a simple example that illustrates the use of IAM policy enforcement. It first attempts to create an S3 bucket with the default user (which fails), then creates a user and attempts to create a bucket with that user (which fails again), and finally attaches a policy to the user to allow `s3:CreateBucket`, which allows the bucket to be created.
-```
+{{< command >}}
 $ awslocal s3 mb s3://test
 make_bucket failed: s3://test An error occurred (AccessDeniedException) when calling the CreateBucket operation: Access to the specified resource is denied
 $ awslocal iam create-user --user-name test
@@ -32,7 +32,7 @@ $ awslocal iam create-policy --policy-name p1 --policy-document '{"Version":"201
 $ awslocal iam attach-user-policy --user-name test --policy-arn arn:aws:iam::000000000000:policy/p1
 $ awslocal s3 mb s3://test
 make_bucket: test
-```
+{{< / command >}}

 ### Supported APIs

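The policy document passed to `create-policy` is truncated in the hunk header above. A minimal allow-policy that serves the same purpose would look roughly like this (illustrative only, not necessarily the exact JSON from the commit):

```bash
# Illustrative policy document; the original JSON is truncated in the diff above.
awslocal iam create-policy --policy-name p1 --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:CreateBucket", "Resource": "*"}
  ]
}'
```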
content/en/docs/Local AWS Services/iot/index.md

Lines changed: 2 additions & 2 deletions
@@ -9,12 +9,12 @@ description: >
 Basic support for [IoT](https://aws.amazon.com/iot/) (including IoT Analytics, IoT Data, and related APIs) is provided in the Pro version. The main endpoints for creating and updating entities are currently implemented, as well as the CloudFormation integrations for creating them.

 The IoT API ships with a built-in MQTT message broker. In order to get the MQTT endpoint, the `describe-endpoint` API can be used; for example, using [`awslocal`](https://github.com/localstack/awscli-local):
-```
+{{< command >}}
 $ awslocal iot describe-endpoint
 {
   "endpointAddress": "localhost:4520"
 }
-```
+{{< / command >}}

 This endpoint can then be used with any MQTT client to send/receive messages (e.g., using the endpoint URL `mqtt://localhost:4520`).

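Any MQTT client can talk to the broker on the returned endpoint. A sketch using the Mosquitto command-line clients, assuming the `localhost:4520` address from the output above; the topic name is arbitrary:

```bash
# Subscribe in the background, then publish a test message to the same topic.
mosquitto_sub -h localhost -p 4520 -t 'demo/topic' &
mosquitto_pub -h localhost -p 4520 -t 'demo/topic' -m 'hello from LocalStack'
```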