by Kangze Huang
Amazon S3 — Cloud File Storage for Performance and Cost Savings
The Complete AWS Web Boilerplate — Tutorial 2
Table of Contents
Part 0: Introduction to the Complete AWS Web Boilerplate
Part 1: User Authentication with AWS Cognito (3 parts)
Part 2: Saving File Storage Costs with Amazon S3 (1 part)
Part 3: Sending Emails with Amazon SES (1 part)
Part 4: Manage Users and Permissions with AWS IAM [Coming Soon]
Part 5: Cloud Server Hosting with AWS EC2 and ELB [Coming Soon]
Part 6: The MongoDB Killer: AWS DynamoDB [Coming Soon]
Part 7: Painless SQL Scaling using AWS RDS [Coming Soon]
Part 8: Serverless Architecture with Amazon Lambda [Coming Soon]
Download the GitHub repository here.
Introduction
Traditionally, files served to an app would be saved to a server’s filesystem, with the architecture designed by a developer. We can immediately see that this is costly in terms of labor, and a business risk, as we must rely on the expertise and design of the developers. It is also costly in terms of bandwidth, as every file must be transferred from the server to the client. If we keep the file system on the main server, it will slow down processing of all the core functionality. If we separate the file system onto its own server, we must pay extra for that server’s uptime, as well as devise a way to access files reliably even when URLs change. And what about different file types? We need to write code to handle JPGs, MP4s, PDFs, ZIP files, etc. What about security and restricting access to only authorized users? Security is a monumental task in itself. Finally, if we want all this to scale, we will have to pay through the nose for it. What if there were a way to achieve all this production-level functionality easily and cost-effectively?
Introducing Amazon Simple Storage Service (S3) — a fully managed file storage system that you can reliably use at scale, and that is secure right out of the box. Your files are automatically stored in multiple physical locations for guaranteed availability, even if one storage center fails. Everything is handled for you, so all you need to do to access your content is provide the URL (and be an authorized user if applicable). S3 is a bargain because S3 bandwidth/storage is a lot cheaper than EC2 bandwidth/storage: storing 10GB of images with 30GB of data transfer-out and 1 million GET requests comes out to a monthly total of… $1.89 USD. Wow. Let’s get started.
Initial Setup
Click the AWS Console icon (the cube) at the top left hand corner and search for S3.
At the S3 page, click the “Create Bucket” button. Name your S3 bucket something unique, as you cannot have the same bucket name as any other S3 bucket on the internet. Also choose a region closest to where your users will reside so that connection speeds are fastest.
On the next S3 management screen, click Permissions and “Add more permissions”. In the dropdown menu, select “Everyone”. This will make your S3 bucket and all its contents publicly accessible.
If we want finer-grained control over who has access to our S3 bucket, we can create a bucket policy. Click the “Add bucket policy” button and then “AWS Policy Generator” in the bottom left-hand corner. The policy text you see below is the output of the policy generator.
Two things to note when you are generating your policy. “Action” refers to a functionality that is allowed to be done on this S3 bucket, in this case deleting, creating and viewing objects. “Principal” refers to the entity allowed to do that action, such as a Cognito User’s IAM role (the example here uses the Cognito_App_UsersAuth_Role created in my AWS Cognito Tutorial). The “Principal” is referred to by its ARN identifier, which you can find on the info page for that principal, and follows the format of arn:aws:<AWS_SERVICE>:::<UNIQUE_IDENTIFIER>. If you look at the ARN for the “Principal” or “Resource”, you will find a similar pattern. Finally, “Resource” refers to the S3 bucket object that this policy applies to, identified again by its ARN. In this case our “Resource” is our S3 bucket followed by /* to indicate all child objects of our S3 bucket. You can always add more policy rules by adding another statement object inside the Statement array.
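For reference, a policy along these lines comes out of the generator looking something like the sketch below. The account ID, role name, and bucket name are placeholders; substitute the ARNs from your own console. The three actions correspond to viewing, creating, and deleting objects:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/Cognito_App_UsersAuth_Role"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::kangzeroos-s3-tutorial/*"
    }
  ]
}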
One last thing we must set-up is the S3 bucket CORS configuration. If we want websites to be able to access our S3 bucket resources without security complaints, we must specify which http actions are allowed. If you don’t know what CORS is, read up on it here.
So this is pretty straightforward: the <AllowedOrigin>*</AllowedOrigin> means our http request can come from anywhere. If we wanted to only allow requests from a certain origin (as would be the case in production), we would have <AllowedOrigin>10.67.53.55</AllowedOrigin>. Next, <AllowedMethod>GET</AllowedMethod> specifies that GET requests are allowed. We can specify more allowed methods, or if we enjoy living dangerously, we can list them all: GET, PUT, POST, DELETE and HEAD (S3 requires each method to be listed explicitly). Finally, <AllowedHeader>*</AllowedHeader> allows any header to be sent in requests to this S3 bucket. If we want to add more rules, simply add another <CORSRule></CORSRule>. Simple, isn’t it? If you need more examples, click “Sample CORS Configurations” in the bottom left-hand corner.
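Putting those pieces together, a minimal CORS configuration matching the description above might look like the following. This is a sketch rather than the boilerplate’s exact file; trim the methods down to what your app actually needs:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>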
Ok, we’re almost ready to dive into the code!
A Quick S3 Briefing
Recall that only users authenticated through AWS Cognito are able to modify (upload or delete) files, whereas all users are able to view files. This boilerplate will start with the uploading of files, after an authenticated user is logged into our app via AWS Cognito. If you don’t know how to do this with AWS Cognito, check out the previous tutorials. If your use case allows all users to modify files, then just make sure your S3 permissions match that. The code is otherwise the same, so let’s get started!
Amazon S3 is a raw key-value store of files, which means each file has a name, and a raw data value for that name. Technically this means we can store any type of file on S3, though there are some limitations defined in the Amazon Web Services licensing agreement, mostly restrictions regarding malicious activity. The maximum size of an individual file on S3 is 5 terabytes, and the max size of a single PUT request is 5 gigabytes. Aside from this, what we can store on S3 is limitless. In S3, folders are also objects, but with a null-like value, as their purpose is purely organizational. S3 folders cannot be renamed, and if changed from private to public they cannot be changed back. Unlike a typical file system, S3 has a flat hierarchy, which means a file that resides inside a folder is technically on the same level as the folder — everything is one level deep. S3 simply uses filename prefixes to distinguish folder hierarchy. For example, a file called “panda.jpg” inside the folder “ChinaTrip” will actually have the filename “ChinaTrip/panda.jpg” in S3. This is Amazon’s simple but effective solution for having folder hierarchies while keeping the benefits of a simple one-layer-deep key-value store. That’s all for the briefing, let’s get started on the code!
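To make the prefix idea concrete, here is a minimal sketch of listing a “folder” with the JavaScript SDK. It assumes the AWS SDK is loaded and configured as elsewhere in this tutorial, and the bucket and folder names are illustrative:

// List everything "inside" ChinaTrip/: really just keys sharing that prefix
const S3 = new AWS.S3()
S3.listObjects({
  Bucket: 'kangzeroos-s3-tutorial',
  Prefix: 'ChinaTrip/',
  Delimiter: '/'
}, function(err, data) {
  if (err) return console.log(err)
  // data.Contents holds objects like { Key: 'ChinaTrip/panda.jpg', ... }
  console.log(data.Contents.map(obj => obj.Key))
})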
The Code
In the boilerplate front-end, go to App/src/api/aws/aws_s3.js. What we first notice is that we are importing an S3 bucket name from App/src/api/aws/aws_profile.js. Make sure that in aws_profile.js you are exporting a bucket name like so:
export const BUCKET_NAME = 'kangzeroos-s3-tutorial'
And then import it in App/src/api/aws/aws_s3.js like so:
import {BUCKET_NAME} from './aws_profile'
Now let’s continue in aws_s3.js and run through the first function we will be using.
Create A User Album
Imagine your users upload photos for whatever purpose. You would want to organize the images that your users upload in folders that represent each user. This is the purpose of createUserS3Album(), which creates an S3 folder named from its only argument albumName — in the case of this boilerplate and its integration with AWS Cognito, the albumName will be the user’s email. Let’s walk through the function.
export function createUserS3Album(albumName){
  const p = new Promise((res, rej)=>{
    // Refresh the Cognito-provided credentials before touching S3
    AWS.config.credentials.refresh(function(){
      const S3 = new AWS.S3()
      if (!albumName) {
        const msg = 'Please provide a valid album name'
        rej(msg)
        return
      }
      albumName = albumName.trim();
      if (albumName.indexOf('/') !== -1) {
        const msg = 'Album names cannot contain slashes.'
        rej(msg)
        return
      }
      // Folder keys are URI-encoded and end with a trailing slash
      const albumKey = encodeURIComponent(albumName) + '/';
      const params = {
        Bucket: BUCKET_NAME,
        Key: albumKey
      }
      // headObject tells us whether the folder object already exists
      S3.headObject(params, function(err, data) {
        if (!err) {
          res('Album already exists.')
          return
        }
        if (err.code !== 'NotFound') {
          const msg = 'There was an error creating your album: ' + err.message
          rej(msg)
          return
        }
        // The album was not found, so create it
        const albumParams = {
          ...params,
          ACL: "bucket-owner-full-control",
          StorageClass: "STANDARD"
        }
        S3.putObject(albumParams, function(err, data) {
          if (err) {
            const msg = 'There was an error creating your album: ' + err.message
            rej(msg)
            return
          }
          res('Successfully created album.');
        });
      });
    })
  })
  return p
}
At a high level, this is the process. We first refresh the Amazon credentials that AWS Cognito provided for us. This is only needed if your S3 bucket security is set up so that only logged-in AWS Cognito users can upload files. If your use case allows anyone to post, then you won’t need to refresh the Amazon credentials. In the boilerplate, createUserS3Album() is called each time a user logs in.
Next we instantiate the S3 object and check for the existence of an albumName. We continue by URI-encoding the albumName into albumKey, which is needed if albumName comes from an email address, as S3 will not accept characters like / and @ in a filename.
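As a concrete example of that encoding step, using a hypothetical email:

// '@' becomes '%40', so the email is safe to use as an S3 key prefix
const albumKey = encodeURIComponent('user@example.com') + '/'
console.log(albumKey) // "user%40example.com/"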
Finally we can take albumKey and BUCKET_NAME to call S3.headObject(). Inside headObject() we check if the albumKey already exists or if we get an error code. If all is good, then we call S3.putObject() with the albumKey. Upon successful creation of albumKey, we can resolve the promise and complete the function.
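As a usage sketch, the login flow can simply chain off the returned promise once Cognito authentication succeeds (the email here is a placeholder):

createUserS3Album('user@example.com')
  .then(function(msg){
    console.log(msg) // 'Successfully created album.' or 'Album already exists.'
  })
  .catch(function(err){
    console.log(err)
  })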
Upload Files to S3
Now let’s cover how to upload actual files. In the boilerplate we use images, but the same concepts apply to any file. The function requires 2 arguments: the albumName (which in the boilerplate is a user’s email), and an array of the files to be uploaded. Let’s walk through the process.
export function uploadImageToS3(email, files){
  const p = new Promise((res, rej)=>{
    if (!files || files.length === 0) {
      const msg = 'Please choose a file to upload first.'
      rej(msg)
      return
    }
    AWS.config.credentials.refresh(function(){
      const S3 = new AWS.S3()
      const S3ImageObjs = []
      let uploadedCount = 0
      for(let f = 0; f<files.length; f++){
        const file = files[f];
        const fileName = file.name;
        const albumPhotosKey = encodeURIComponent(email) + '/';
        const timestamp = new Date().getTime()/1000
        // Prefix with the album key and a timestamp so names never collide
        const photoKey = albumPhotosKey + "--" + timestamp + "--" + fileName;
        S3.upload({
          Bucket: BUCKET_NAME,
          Key: photoKey,
          Body: file,
          ACL: 'public-read'
        }, function(err, data) {
          if (err) {
            const msg = 'There was an error uploading your photo: '+ err.message
            rej(msg)
            return
          }
          S3ImageObjs.push({
            photoKey: photoKey,
            url: data.Location
          })
          uploadedCount++
          // Resolve once every file has finished uploading
          if(uploadedCount==files.length){
            res(S3ImageObjs)
          }
        })
      }
    })
  })
  return p
}
First we check that files actually has an array of items inside it. Then we again refresh the AWS credentials and instantiate the S3 object. Now we use a for-loop to loop through all the files and one by one upload them to S3. At the last file, we resolve the promise with an array of all the uploaded files, S3ImageObjs. So what is the for-loop doing?
Each file is named with albumName (which in this case is a URI-encoded email) as a prefix, then timestamped, and then appended with the file’s original filename. The end name is the photoKey. Then we call S3.upload() with the correct params, and upon successful upload, we push the result into the S3ImageObjs array. A successful upload will return an object with a Location property that is a string URL for accessing that file. If we visit the Location URL, we will see our uploaded images. One last thing to note is the ACL property in S3.upload(). ACL is set to ‘public-read’ so that the file is publicly accessible by all.
The Rest of The Stuff
Great, so we have file reading and posting (GET & POST) completed for our boilerplate. What about updating and deleting? Well, updating is a matter of replacing a previous file and follows a similar POST process. Deleting is a simple matter of calling S3.deleteObject() with the photoKey and bucket name.
const params = {
  Bucket: 'STRING_VALUE',
  Key: 'STRING_VALUE'
};
s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
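If you want deletion to match the style of the other functions in aws_s3.js, you could wrap it in a promise-based helper like the following sketch (this helper is not part of the boilerplate itself):

export function deletePhotoFromS3(photoKey){
  const p = new Promise((res, rej)=>{
    // Only authenticated Cognito users should be able to delete
    AWS.config.credentials.refresh(function(){
      const S3 = new AWS.S3()
      S3.deleteObject({
        Bucket: BUCKET_NAME,
        Key: photoKey
      }, function(err, data) {
        if (err) {
          rej('There was an error deleting your photo: ' + err.message)
          return
        }
        res('Successfully deleted photo.')
      })
    })
  })
  return p
}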
And that’s it! The basics of Amazon S3, with coverage of security and auth-integration. For the majority of your use cases, this will be all you need. That was pretty straightforward, and wow, do we get a lot of benefit using raw file storage over a traditional file system on our main server. I hope this article has convinced you of S3’s benefits and how to implement it in your own app.
See you in the next article of this series!
These methods were partially used in the deployment of renthero.ca
Source: https://www.freecodecamp.org/news/amazon-s3-cloud-file-storage-for-performance-and-cost-savings-8f38d7769619/