Public accessibility to data in the managed data warehouse service has been disabled.
Amazon's security improvements for its Redshift managed data warehouse service are welcome additions, says an expert.
Loris Degioanni, chief technology officer at Sysdig, told InfoWorld that AWS's enhanced security defaults for Amazon Redshift are a "necessary evolution for the accelerated cloud adoption we've seen across organizations with varying security expertise. Secure configurations are the first line of defense, and by enforcing best practices from day one, these changes reinforce the shift-left mindset Sysdig has long championed. However, security doesn't stop at strong defaults. Continuous monitoring, risk prioritization, and real-time threat detection are critical."
Redshift allows organizations to store and analyze their data using their choice of business intelligence tools. According to HG Insights, just over 27,700 organizations use Redshift.
The changes
The three changes are in Redshift's default settings for newly created clusters, Redshift Serverless workgroups, and clusters restored from snapshots:
- Public accessibility to data has been disabled. Newly created clusters will be accessible only within a customer's virtual private cloud (VPC). If an administrator needs public access, they must explicitly override the default and set the "PubliclyAccessible" parameter to "true" when running the "CreateCluster" or "RestoreFromClusterSnapshot" API operations. Note that if Redshift applications are in a different VPC, customers can configure cross-VPC access.
With a publicly accessible cluster, AWS recommends admins always use security groups or network access control lists (network ACLs) to restrict access;
- Database encryption is enabled by default. In other words, the ability to create unencrypted clusters in the Redshift console is gone. When an admin uses the console, CLI, API, or CloudFormation to create a provisioned cluster without specifying an AWS Key Management Service (AWS KMS) key, the cluster will automatically be encrypted with an AWS-owned key, which AWS manages on the customer's behalf.
- Secure connections are enforced by default. Communication between a customer's applications and the Amazon Redshift data warehouse is now encrypted in transit, protecting the confidentiality and integrity of the data being transmitted.
A new default parameter group named "default.redshift-2.0" is created for newly created or restored clusters, with the "require_ssl" parameter set to "true" by default. New clusters created without a specified parameter group will automatically use the "default.redshift-2.0" parameter group, which will be automatically selected in the Redshift console. This change will also be reflected in the "CreateCluster" and "RestoreFromClusterSnapshot" API operations, as well as in the corresponding console, AWS CLI, and AWS CloudFormation operations.
For customers using existing or custom parameter groups, Redshift will continue to honor the "require_ssl" value specified in the customer's parameter group. However, AWS recommends that admins update this parameter to "true" to enhance the security of connections. Admins still have the option to change this value in their custom parameter groups as needed. The procedure is outlined in the Amazon Redshift Management Guide for configuring security options for connections.
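The three defaults above map directly onto the CreateCluster parameters. The following is a minimal sketch in Python (boto3-style), where the cluster name, password, and KMS key ARN are placeholders, showing where each setting would be pinned explicitly in an automated script; under the new defaults, omitting these parameters already yields a private, encrypted, SSL-enforcing cluster.

```python
# Sketch of CreateCluster arguments under the new defaults.
# All identifiers, credentials, and ARNs below are placeholders.
create_cluster_kwargs = {
    "ClusterIdentifier": "analytics-cluster",   # placeholder name
    "NodeType": "ra3.xlplus",
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",         # placeholder
    "NumberOfNodes": 2,
    # Change 1: private by default. Set True only if public access is
    # required, and then lock down ingress with security groups / ACLs.
    "PubliclyAccessible": False,
    # Change 2: omit KmsKeyId to fall back to the AWS-owned key, or
    # pin a customer-managed key (placeholder ARN shown).
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
    # Change 3: the new default group ships with require_ssl = true;
    # naming it explicitly keeps scripted deployments deterministic.
    "ClusterParameterGroupName": "default.redshift-2.0",
}

# With AWS credentials configured, the actual call would be:
# import boto3
# boto3.client("redshift").create_cluster(**create_cluster_kwargs)
print(create_cluster_kwargs["PubliclyAccessible"])
```

Keeping these values in the request rather than relying on defaults also makes security intent visible in code review, which matters for the automated-script scenario Amazon warns about below.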
Amazon also noted that those creating unencrypted clusters by using automated scripts or using data sharing with unencrypted clusters could be impacted by the changes. "If you regularly create new unencrypted consumer clusters and use them for data sharing, review your configurations to verify that the producer and consumer clusters are both encrypted to reduce the chance that you will experience disruptions in your data-sharing workloads," it advised.
Asked why these changes are being made, Sirish Chandrasekaran, Redshift's vice-president, said the additional security defaults will help customers adhere to best practices in data security from day one without requiring additional setup, reducing the risk of potential misconfigurations.
Out of the box, Redshift comes with a number of security capabilities, including support for multi-factor authentication, encryption for data at rest, access control, identity management, and federation for single sign-on. But these and other tools are useless unless they are used and properly configured.
Recommendations
In a series of blogs, Palo Alto Networks made a number of recommendations to Redshift admins:
- make sure they know exactly which users and roles have the "redshift:GetClusterCredentials" permission and access to the "redshift-data" API. An attacker with this permission can generate temporary credentials to access a Redshift cluster;
- Redshift admins can create users and groups and assign them only the privileges they need. Admins should create a user per identity, so in the event of a security incident, it'll be possible to track and monitor what data was accessed, or even prevent unauthorized access before it happens;
- because Redshift can assume IAM roles that allow a data cluster to access external data sources, make sure these roles are granted only to those who need this access.
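The first recommendation above can be partially automated. The following is a minimal sketch, assuming a policy document already retrieved as JSON (the example policy here is made up), that scans IAM policy statements for the sensitive actions Palo Alto Networks flags:

```python
import json

# IAM actions flagged in the recommendations above (case-insensitive match).
SENSITIVE_PREFIXES = ("redshift:getclustercredentials", "redshift-data:")

def flagged_actions(policy: dict) -> list:
    """Return the Allow-ed actions in an IAM policy that match the sensitive list."""
    hits = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM permits a single statement object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # a single action may be a bare string
            actions = [actions]
        for action in actions:
            if action.lower().startswith(SENSITIVE_PREFIXES):
                hits.append(action)
    return hits

# Hypothetical policy document for illustration only:
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["redshift:GetClusterCredentials", "redshift-data:ExecuteStatement"],
     "Resource": "*"}
  ]
}
""")

print(flagged_actions(policy))
# prints ['redshift:GetClusterCredentials', 'redshift-data:ExecuteStatement']
```

A real audit would iterate over every policy attached to every user and role (e.g. via the IAM API), but the matching logic is the same; note this simple prefix check does not expand IAM wildcards such as "redshift:*".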
Palo Alto Networks also provided more detail on access control in a blog post, and Sysdig offered this advice on Redshift security best practices.
Finally, Sysdig's Degioanni provided this additional caution: "While cloud providers provide a level of security for their underlying infrastructure and services, organizations remain responsible for protecting their own applications and data under a shared responsibility model. Because attacks can happen quickly, sometimes within minutes, having real-time visibility into production workloads is crucial for detecting active risk and responding to threats as they occur."


