Executing CQL statements

The sample application executes CQL statements to create a schema and insert some data into the database.

Prerequisites

  • An Astra DB database with a keyspace named demo.
  • Database credentials and the secure connect bundle file.

Procedure

Retrieving metadata for the cluster is good, but you also want to be able to read and write data to the cluster. The driver lets you execute CQL statements using a session instance that you retrieve from the cluster object. You will add code to your client for:

  • creating tables
  • inserting data into those tables
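
This section does not show the sample application's source language. As a minimal sketch, assuming the DataStax Java driver 3.x API (where a Session is obtained from a Cluster built with the secure connect bundle and credentials; the path and credential values below are placeholders), the connection could look like this:

import java.io.File;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CqlStatementsExample {
    public static void main(String[] args) {
        // Build a cluster from the secure connect bundle and credentials.
        try (Cluster cluster = Cluster.builder()
                .withCloudSecureConnectBundle(new File("/path/to/secure-connect-demo.zip"))
                .withCredentials("clientId", "clientSecret")
                .build()) {
            // The session is retrieved from the cluster object and is used
            // to execute CQL statements against the database.
            Session session = cluster.connect();
            // ... create tables and insert data here ...
        }
    }
}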

Creating a table

  1. Drop the table if it exists.
  2. Create the table (see the sketch after this list).
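
A sketch of these two steps for the songs table, assuming the Java driver 3.x session from the connection sketch above (the playlists table shown in the Result section follows the same pattern):

// Drop the table if it already exists so the sample can be re-run.
session.execute("DROP TABLE IF EXISTS demo.songs");

// Create the table with the columns shown in the Result section.
session.execute(
    "CREATE TABLE demo.songs ("
        + "id uuid PRIMARY KEY,"
        + "title text,"
        + "album text,"
        + "artist text,"
        + "tags set<text>,"
        + "data blob)");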

Loading data

Execute an INSERT statement against the session.
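
As a rough sketch using the same session, with illustrative sample values (the UUID and song details are placeholders, not values from the sample application):

// Insert a single row into the songs table; the values are sample data.
session.execute(
    "INSERT INTO demo.songs (id, title, album, artist, tags) VALUES ("
        + "756716f7-2e54-4715-9f00-91dcbea6cf50,"
        + "'La Petite Tonkinoise',"
        + "'Bye Bye Blackbird',"
        + "'Joséphine Baker',"
        + "{'jazz', '2013'})");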

Result

On the Astra dashboard, connect to the database and select the CQL Console tab.

Issue these two cqlsh commands:

  1. USE demo;
  2. DESCRIBE KEYSPACE demo;

The following output is displayed:

CREATE KEYSPACE demo WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east1': '3'} AND durable_writes = true;

CREATE TABLE demo.playlists (
    id uuid,
    title text,
    album text,
    artist text,
    song_id uuid,
    PRIMARY KEY (id, title, album, artist)
) WITH CLUSTERING ORDER BY (title ASC, album ASC, artist ASC)
    AND additional_write_policy = '99PERCENTILE'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.UnifiedCompactionStrategy'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99PERCENTILE';

CREATE TABLE demo.songs (
    id uuid PRIMARY KEY,
    album text,
    artist text,
    data blob,
    tags set<text>,
    title text
) WITH additional_write_policy = '99PERCENTILE'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.UnifiedCompactionStrategy'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99PERCENTILE';