Yann Neuhaus

dbi services technical blog

Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift

Fri, 2018-05-18 13:06

In the last two posts we deployed an EDB database container and two pgpool instances and then scaled that up to include a read only replica. In this post we will use a ConfigMap to adjust parameters in postgresql.conf, as you will probably need to do that when you start using the EDB containers in your environment.

A ConfigMap is an object that can be used to provide parameter/value pairs to the container, which will then be added to the postgresql.conf file of the database containers. Creating a ConfigMap is quite easy: all you need to do is create a file called “postgresql.conf.in” which lists all the parameters you want to get adjusted:

dwe@dwe:~$ cat /opt/ConfigMaps/postgresql.conf.in
work_mem='12MB'
shared_buffers='56MB'

In this case we want to adjust work_mem and shared_buffers, that’s it. To load that into OpenShift use the oc command line utility:

dwe@dwe:~$ oc create configmap postgres-map --from-file=/opt/ConfigMaps/postgresql.conf.in
configmap "postgres-map" created
dwe@dwe:~$ oc get configmaps postgres-map
NAME           DATA      AGE
postgres-map   1         12m
dwe@dwe:~$ oc get configmaps postgres-map -o yaml
apiVersion: v1
data:
  postgresql.conf.in: |+
    work_mem='12MB'
    shared_buffers='56MB'

kind: ConfigMap
metadata:
  creationTimestamp: 2018-05-18T08:49:35Z
  name: postgres-map
  namespace: myproject
  resourceVersion: "16618"
  selfLink: /api/v1/namespaces/myproject/configmaps/postgres-map
  uid: 63c3a154-5a78-11e8-992f-ca15bcd30222

The issue is now that our current template does not know anything about that ConfigMap. So either adjust it or create a new one like this (the relevant changes are the pg-initconf volume and its volumeMounts entry in the database DeploymentConfig):

cat edb-as10-0-edb-cust.yaml
apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/edb/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres' 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/edb/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
             
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Once you have imported that into OpenShift (check here if you don’t know how to do that) you get a new template you can deploy from.

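For reference, the import and a deployment can also be done entirely with the oc client. This is only a sketch, not taken from this post, and the parameter values are examples:

# load the template into the current project
oc create -f edb-as10-0-edb-cust.yaml -n myproject
# deploy from it, overriding the required parameters
oc new-app --template=edb-as10-custom \
  -p DATABASE_NAME=edb \
  -p PERSISTENT_VOLUME=edb-pv \
  -p PERSISTENT_VOLUME_CLAIM=edb-pvc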

When you create a new deployment of that one (again, check here if you don’t know how to do that) you will notice several things when you log in to the container once it is up and running:

dwe@dwe:~$ oc rsh edb-as10-0-1-gk8dt
sh-4.2$ psql postgres
psql.bin (10.3.8)
Type "help" for help.

postgres=# show work_mem;
 work_mem 
----------
 12MB
(1 row)

postgres=# show shared_buffers ;
 shared_buffers 
----------------
 56MB
(1 row)

First of all, and this is what we wanted: the PostgreSQL instance came up with the parameters we specified in the ConfigMap. When you look at the volumes present in the container there is a new one named after what we specified in the template:

sh-4.2$ df -h
Filesystem      Size  Used Avail Use% Mounted on
none             18G  4.0G   14G  24% /
tmpfs          1002M     0 1002M   0% /dev
tmpfs          1002M     0 1002M   0% /sys/fs/cgroup
/dev/sda1        18G  4.0G   14G  24% /initconf
shm              64M   12K   64M   1% /dev/shm
tmpfs          1002M   16K 1002M   1% /run/secrets/kubernetes.io/serviceaccount

Inside that volume there is the postgresql.conf.in file we also specified in the template, and it is linked into $PGDATA:

sh-4.2$ ls -la /initconf
total 12
drwxrwsrwx  3 root 1000070000 4096 May 18 09:55 .
drwxr-xr-x 85 root root       4096 May 18 09:55 ..
drwxr-sr-x  2 root 1000070000 4096 May 18 09:55 ..2018_05_18_09_55_19.162613490
lrwxrwxrwx  1 root root         31 May 18 09:55 ..data -> ..2018_05_18_09_55_19.162613490
lrwxrwxrwx  1 root root         25 May 18 09:55 postgresql.conf.in -> ..data/postgresql.conf.in

And finally we can confirm the content of that file:

sh-4.2$ cat /initconf/postgresql.conf.in 
work_mem='12MB'
shared_buffers='56MB'
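
If you later change a parameter in postgresql.conf.in, one possible workflow (a sketch, not from this post) is to replace the ConfigMap and roll out a new deployment so the database container picks it up; note that this restarts the database pod:

# re-create the ConfigMap definition from the edited file and replace it
oc create configmap postgres-map --from-file=/opt/ConfigMaps/postgresql.conf.in \
  --dry-run -o yaml | oc replace -f -
# trigger a new rollout of the database DeploymentConfig (named edb-as10-0 here)
oc rollout latest dc/edb-as10-0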

You can do the same for pg_hba.conf by creating a new ConfigMap for pg_hba.conf.in. In the next post we’ll look at how EDB Failover Manager is configured inside the containers.

 

This article, Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift, first appeared on Blog dbi services.

Scaling the EDB containers in MiniShift/OpenShift

Fri, 2018-05-18 03:21

When you followed the last post you should have two pgpool instances and one EDB Postgres instance running in OpenShift. pgpool is responsible for the load balancing, meaning: send write requests to the master instance and spread read requests over all instances. In the current setup this does not make much sense as we only have one instance, so in this post we will scale the setup to finally consist of one master and one read only replica container.

Let’s check what pods are currently running:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-vldkj   1/1       Running   2          5d
edb-pgpool-1-699vh   1/1       Running   2          5d
edb-pgpool-1-nsgrm   1/1       Running   2          5d

As expected, two pgpool containers and one database container are up and running. When we log in to one of the pgpool containers we should see the pgpool processes:

dwe@dwe:~$ oc rsh edb-pgpool-1-699vh
sh-4.2$ ps -ef | grep pool
edbuser     63     0  0 15:27 ?        00:00:00 pgpool -D -d -f /etc/sysconfig/edb/pgpool3.5/pgpool.conf -a /etc/sysconfig/edb/pgpool3.5/pool_hba.conf
edbuser     64    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     65    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     66    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     67    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     68    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     69    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     70    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     71    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     72    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     73    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     74    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     75    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     76    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     77    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     78    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     79    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     80    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     81    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     82    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     83    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     84    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     85    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     86    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     87    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     88    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     89    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     90    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     91    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     92    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     93    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     94    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     95    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     96    63  0 15:27 ?        00:00:00 pgpool: PCP: wait for connection request
edbuser     97    63  0 15:27 ?        00:00:00 pgpool: worker process
edbuser    365   342  0 15:30 ?        00:00:00 grep pool

What I am interested in is the backend configuration. As we do have only one database container there should be only one backend configured:

sh-4.2$ cat /etc/sysconfig/edb/pgpool3.5/pgpool.conf | grep backend | egrep -v "^ |^#"
backend_hostname0 = '172.17.0.7'
backend_port0 = 5444
backend_weight0 = 1
backend_data_directory0 = '/var/lib/edb/as9.6/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
								   # but also for ordinary conection to backend.
fail_over_on_backend_error = off

Fine, this is what is expected. The same is of course true for the other container. The question is: we have two pgpool instances running against one database instance, so how can we connect? What happened in addition when we deployed the setup is that a service was created:

dwe@dwe:~$ oc get svc -o wide
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                     PORT(S)          AGE       SELECTOR
edb-service   LoadBalancer   172.30.162.55   172.29.228.247,172.29.228.247   5444:30726/TCP   5d        lb=edb-pgpool

This service proxies the requests to one of the pgpool instances, which then routes the connection to the database instance. Let’s try to connect from outside:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U postgres -W
Password for user postgres: 
psql: FATAL:  md5 authentication failed
DETAIL:  pool_passwd file does not contain an entry for "postgres"

Hm. Looking at the available databases and users in the database container:

dwe@dwe:~$ oc rsh edb-as10-0-1-vldkj
sh-4.2$ psql edb
psql.bin (10.3.8)
Type "help" for help.

edb=# \l
                                 List of databases
   Name    |    Owner     | Encoding  | Collate | Ctype | ICU |  Access privileges  
-----------+--------------+-----------+---------+-------+-----+---------------------
 edb       | enterprisedb | SQL_ASCII | C       | C     |     | 
 postgres  | edbuser      | SQL_ASCII | C       | C     |     | 
 template0 | edbuser      | SQL_ASCII | C       | C     |     | =c/edbuser         +
           |              |           |         |       |     | edbuser=CTc/edbuser
 template1 | edbuser      | SQL_ASCII | C       | C     |     | =c/edbuser         +
           |              |           |         |       |     | edbuser=CTc/edbuser
(4 rows)

edb=# \du
                                         List of roles
       Role name       |                         Attributes                         | Member of 
-----------------------+------------------------------------------------------------+-----------
 aq_administrator_role | No inheritance, Cannot login                              +| {}
                       | Profile default                                            | 
 edbuser               | Superuser, Create role, Create DB, Replication, Bypass RLS+| {}
                       | Profile default                                            | 
 enterprisedb          | Superuser                                                 +| {}
                       | Profile default                                            | 
 repl                  | Replication                                               +| {}
                       | Profile default                                            | 

… it becomes clear: There is no user postgres. Using the edbuser it works fine:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# 

How can we verify that the service actually is routing the connection to both pgpool instances? PostgreSQL comes with a set of administration functions and one of those can be used to verify that.

Do a first connection and ask for the client address (the client is one of the pgpool instances in this case):

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# select inet_client_addr();
 inet_client_addr 
------------------
 172.17.0.6
(1 row)

Do a second one:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# select inet_client_addr();
 inet_client_addr 
------------------
 172.17.0.7
(1 row)

postgres=# 

These two IP addresses are my two pgpool instances, so this confirms that the service is behaving as expected. As having a single database instance in that setup is somewhat pointless, let’s scale the setup by adding another database container. We’ll be doing that with the console for now (a CLI alternative is sketched below).

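If you prefer the command line over the console, the same scale-up can be done with oc; a sketch, assuming the database DeploymentConfig is named edb-as10-0 as in this deployment:

# scale the database DeploymentConfig from one to two replicas
oc scale dc edb-as10-0 --replicas=2
# watch the new pod come up
oc get pods -w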

After a few seconds another pod is running.

What I expect is that the pgpool instances got re-configured to include the new backend. Looking at the pods:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-frc99   1/1       Running   0          15m
edb-as10-0-1-gkpgq   1/1       Running   0          31m
edb-pgpool-1-lvwsq   1/1       Running   0          31m
edb-pgpool-1-nh4qb   1/1       Running   0          31m

Let’s check the configuration of the first pgpool instance:

dwe@dwe:~$ oc rsh edb-pgpool-1-lvwsq
sh-4.2$ ps -ef | grep conf
edbuser   1918     0  0 07:43 ?        00:00:00 pgpool -D -d -f /etc/sysconfig/edb/pgpool3.5/pgpool.conf -a /etc/sysconfig/edb/pgpool3.5/pool_hba.conf
edbuser   2508  2483  0 07:48 ?        00:00:00 grep conf
sh-4.2$ cat /etc/sysconfig/edb/pgpool3.5/pgpool.conf | grep backend | egrep -v "^ |^#"
backend_hostname0 = '172.17.0.7'
backend_port0 = 5444
backend_weight0 = 1
backend_data_directory0 = '/var/lib/edb/as9.6/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
								   # but also for ordinary conection to backend.
fail_over_on_backend_error = off
backend_hostname1 = '172.17.0.5'
backend_port1 = 5444
backend_weight1 = 1
backend_data_directory1 = '/data1'
backend_flag1 = 'DISALLOW_TO_FAILOVER'
sh-4.2$ 

Here we go, works as expected. What is a bit strange is that $PGDATA on the second backend is in /data1 where it is in /var/lib/edb/as9.6/data on the first backend. From now on read only queries should land on either the slave or the master node and statements that modify data should land on the master node. Let’s check what is master and what is replica:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "show pool_nodes" postgres
Password for user edbuser: 
 node_id |  hostname  | port | status | lb_weight |  role   | select_cnt 
---------+------------+------+--------+-----------+---------+------------
 0       | 172.17.0.7 | 5444 | 2      | 0.500000  | primary | 1
 1       | 172.17.0.5 | 5444 | 2      | 0.500000  | standby | 0
(2 rows)

When I do two read only statements I should be round-robined:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select inet_server_addr()" postgres
Password for user edbuser: 
 inet_server_addr 
------------------
 172.17.0.7
(1 row)

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select inet_server_addr()" postgres
Password for user edbuser: 
 inet_server_addr 
------------------
 172.17.0.5
(1 row)

Disabling load balancing should bring me to the master:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select /*NO LOAD BALANCE*/ inet_server_addr()" postgres
Password for user edbuser: 
 inet_server_addr 
------------------
 172.17.0.5
(1 row)

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select /*NO LOAD BALANCE*/ inet_server_addr()" postgres
Password for user edbuser: 
 inet_server_addr 
------------------
 172.17.0.5
(1 row)

Works quite fine. In the next post we’ll add ConfigMaps to customize our deployment. Usually you want to pre-configure the deployment so it fits your requirements, and that is what ConfigMaps are for.

 

This article, Scaling the EDB containers in MiniShift/OpenShift, first appeared on Blog dbi services.

12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users

Tue, 2018-05-15 16:24

Mike Dietrich has blogged recently about upuserxt.lst and upobjxt.lst and how to query them with external tables. The first time I saw those ‘.lst’ files, the default extension for sqlplus spool files, I wondered whether they were provided in ?/rdbms/admin on purpose, or if they were just some leftovers from tests Oracle did before packaging the Oracle Home. Finally, I realized that they are there on purpose and that those ‘.lst’ files are important when upgrading to 12c.

I’ll look at an 18c Oracle Home (?/rdbms/admin) in the Oracle Cloud but that applies to all 12c (and 18c is a 12cR2 patchset). One of the most important little features of 12c is the tagging of Oracle Supplied objects and users. Before 12c it was a nightmare to distinguish system users from application ones. I detailed that in a previous post.

At database creation: _oracle_script

In a newly created 12c database, all the objects and users belonging to the system are flagged with ORACLE_MAINTAINED=Y

Here is an example listing system users and roles:

SQL> select listagg(username,',' on overflow truncate) within group (order by username) from dba_users where oracle_maintained='Y';
LISTAGG(USERNAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYUSERNAME)
 
ANONYMOUS,APPQOSSYS,AUDSYS,CTXSYS,DBSFWUSER,DBSNMP,DIP,DVF,DVSYS,GGSYS,GSMADMIN_INTERNAL,GSMCATUSER,GSMUSER,LBACSYS,MDDATA,MDSYS,OJVMSYS,OLAPSYS,ORACLE_OCM,ORDDATA,ORDPLUGINS,ORDSYS,OUTLN,REMOTE_SCHEDULER_AGENT,SI_INFORMTN_SCHEMA,SYS,SYS$UMF,SYSBACKUP,SYSDG,SYSKM,SYSRAC,SYSTEM,WMSYS,XDB,XS$NULL
 
SQL> select listagg(role,',' on overflow truncate) within group (order by role) from dba_roles where oracle_maintained='Y';
LISTAGG(ROLE,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYROLE)
 
ADM_PARALLEL_EXECUTE_TASK,APPLICATION_TRACE_VIEWER,AQ_ADMINISTRATOR_ROLE,AQ_USER_ROLE,AUDIT_ADMIN,AUDIT_VIEWER,AUTHENTICATEDUSER,CAPTURE_ADMIN,CDB_DBA,CONNECT,CTXAPP,DATAPATCH_ROLE,DATAPUMP_EXP_FULL_DATABASE,DATAPUMP_IMP_FULL_DATABASE,DBA,DBFS_ROLE,DBJAVASCRIPT,DBMS_MDX_INTERNAL,DV_ACCTMGR,DV_ADMIN,DV_AUDIT_CLEANUP,DV_DATAPUMP_NETWORK_LINK,DV_GOLDENGATE_ADMIN,DV_GOLDENGATE_REDO_ACCESS,DV_MONITOR,DV_OWNER,DV_PATCH_ADMIN,DV_POLICY_OWNER,DV_PUBLIC,DV_REALM_OWNER,DV_REALM_RESOURCE,DV_SECANALYST,DV_STREAMS_ADMIN,DV_XSTREAM_ADMIN,EJBCLIENT,EM_EXPRESS_ALL,EM_EXPRESS_BASIC,EXECUTE_CATALOG_ROLE,EXP_FULL_DATABASE,GATHER_SYSTEM_STATISTICS,GDS_CATALOG_SELECT,GGSYS_ROLE,GLOBAL_AQ_USER_ROLE,GSMADMIN_ROLE,GSMUSER_ROLE,GSM_POOLADMIN_ROLE,HS_ADMIN_EXECUTE_ROLE,HS_ADMIN_ROLE,HS_ADMIN_SELECT_ROLE,IMP_FULL_DATABASE,JAVADEBUGPRIV,JAVAIDPRIV,JAVASYSPRIV,JAVAUSERPRIV,JAVA_ADMIN,JMXSERVER,LBAC_DBA,LOGSTDBY_ADMINISTRATOR,OEM_ADVISOR,OEM_MONITOR,OLAP_DBA,OLAP_USER,OLAP_XS_ADMIN,OPTIMIZER_PROCESSING_RATE,ORDADMIN,PDB_DBA,PROVISIONER,RDFCTX_ADMIN,RECOVERY_CATALOG_OWNER,RECOVERY_CATALOG_OWNER_VPD,RECOVERY_CATALOG_USER,RESOURCE,SCHEDULER_ADMIN,SELECT_CATALOG_ROLE,SODA_APP,SYSUMF_ROLE,WM_ADMIN_ROLE,XDBADMIN,XDB_SET_INVOKER,XDB_WEBSERVICES,XDB_WEBSERVICES_OVER_HTTP,XDB_WEBSERVICES_WITH_PUBLIC,XS_CACHE_ADMIN,XS_CONNECT,XS_NAMESPACE_ADMIN,XS_SESSION_ADMIN

And here is an example listing the owners of system objects flagged with ORACLE_MAINTAINED=Y:

SQL> select listagg(num||' '||owner,',' on overflow truncate) within group (order by num) from (select owner,count(*) num from dba_objects where oracle_maintained='Y' group by owner);
LISTAGG(NUM||''||OWNER,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYNUM)
 
6 APPQOSSYS,8 DBSFWUSER,8 ORACLE_OCM,8 SI_INFORMTN_SCHEMA,10 ORDPLUGINS,10 OUTLN,13 REMOTE_SCHEDULER_AGENT,22 DVF,24 OJVMSYS,25 OLAPSYS,35 AUDSYS,55 DBSNMP,209 GSMADMIN_INTERNAL,239 LBACSYS,292 ORDDATA,398 DVSYS,399 WMSYS,412 CTXSYS,466 SYSTEM,1029 XDB,2574 MDSYS,3171 ORDSYS,12173 PUBLIC,51069 SYS
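
Conversely, the same flag makes it easy to list only the application (non Oracle Maintained) users and objects. A quick sketch run from the shell as sysdba, not part of the original post:

sqlplus -s / as sysdba <<'SQL'
-- users and object owners that are NOT Oracle maintained
select username from dba_users where oracle_maintained='N' order by username;
select owner, count(*) from dba_objects where oracle_maintained='N' group by owner order by owner;
SQL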

How is this done? That’s easy. All system objects are created by Oracle scripts, such as those called by catalog.sql and catproc.sql during database creation. Those scripts set “_oracle_script”=true before running the DDL, and all objects created while “_oracle_script”=true are flagged as Oracle Maintained.

If, in a lab (not in prod), you create your own objects in the same way, they will also be flagged as Oracle Maintained:

SQL> connect / as sysdba
Connected.
SQL> alter session set "_oracle_script"=true;
Session altered.
 
SQL> create user FRANCK identified by myself;
User FRANCK created.
SQL> alter user FRANCK quota unlimited on users;
User FRANCK altered.
 
SQL> create table FRANCK.DEMO(n primary key) as select 1 from dual;
Table FRANCK.DEMO created.
SQL> alter session set "_oracle_script"=false;
Session altered.
 
SQL> select username,oracle_maintained from dba_users where username='FRANCK';
 
USERNAME ORACLE_MAINTAINED
-------- -----------------
FRANCK Y
SQL> select owner,object_name,object_type,oracle_maintained from dba_objects where owner='FRANCK';
 
OWNER OBJECT_NAME OBJECT_TYPE ORACLE_MAINTAINED
------ ---- ----- -
FRANCK DEMO TABLE Y
FRANCK SYS_C007409 INDEX Y

So, this one is easy. Database creation runs with “_oracle_script”=true and objects and users created when this parameter is set to true are flagged as Oracle Supplied objects.

And during upgrades?

When you upgrade from 11g to 12c you don’t have this Oracle Maintained information. The catupgrd runs with “_oracle_script”=true but this script does not create all objects. However, the upgraded database has all system objects flagged as Oracle Maintained. This is where upuserxt.lst and upobjxt.lst are used.

When Oracle developers build a new Oracle Home to be shipped, they create a database (including all options, I suppose) and then run the utlupox.sql script. This script lists all Oracle Maintained users and objects, just relying on the flag that has been set during creation, and spools them to the upuserxt.lst and upobjxt.lst files. And those files are shipped in the Oracle Home (all that under ?/rdbms/admin).

These .lst files will be used when upgrading from pre-12c in order to set the flags for Oracle Maintained objects. The external tables SYS.USERXT on upuserxt.lst and SYS.OBJXT on upobjxt.lst are created by catupcox.sql and, finally, those tables are read by catuposb.sql to set Oracle Maintained in USER$ and OBJ$. The catuposb.sql is a bit more complex than that because there are objects that can have different names when a database is created.

Note that this information about Oracle Maintained objects, in addition to being very useful for us, is crucial when you further convert the non-CDB to a PDB because those will become metadata links.

 

This article, 12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users, first appeared on Blog dbi services.

Configuring WebLogic Server 12.2.1.3 Diagnostics Framework policies and actions

Tue, 2018-05-15 09:28

In this post we configure a policy/action (the new name for watch/notification) to send a JMX notification when the monitored WebLogic Server has stuck threads or high memory usage. We will use a simple JMX notification listener program which can process WLDF JMX notifications. In this case, it will simply print the contents of the notification.
For simplicity, we will have only two servers in the domain, AdminServer (admin-server) and Server1 (managed-server). We will configure WLDF on Server1 to send a JMX notification when the heap free percentage of the managed server Server1 (actually, any server it is targeted to within the domain) is less than 20% or as soon as there is a stuck thread.
We will use the WebLogic Server Administration Console to configure WLDF. It can also be done with WLST (a sketch follows the console steps below). The Console provides a nice feature to Create Repeatable Configuration Scripts, which can be handy for building such scripts. The configuration steps for this case consist of:
1. Create a Diagnostic Module.
2. Enable periodic metrics collection in the Diagnostic module.
3. Create a JMX notification type.
4. Create a policy to detect if the WebLogic Server is having stuck threads
5. Create a policy to detect high memory usage on the WebLogic Server
6. Target the Diagnostic module to the WebLogic Server server1
We will go over these steps and see how to configure WLDF using Console for this task.

1. Create a Diagnostic Module.

a) Log into Console and acquire Edit Lock so we can add new configuration to WebLogic Server.
b) From the left navigation tree, open the Diagnostics node and click on Diagnostic Modules. Console will show a table of existing Diagnostic modules within the domain. Click the New button to create a new diagnostic module. Call it myWLDF. Click OK to create the module. At this point, we have an empty myWLDF diagnostic module.

2. Enable periodic metrics collection in the Diagnostic module.

a) Click on the myWLDF module link in the table of Diagnostics modules.
b) Click on Collected Metrics sub-tab under Configuration tab.
c) Check the Enabled checkbox and set the Sampling Period to 10000 (10 seconds). Click Save.

3. Create a JMX notification type.

a) Configuring policies/actions (Watches and Notifications in earlier versions) has two aspects. The first aspect is a policy (watch rule) which specifies the condition that WLDF will check. The second aspect is the set of actions (notifications) that will be sent when the rule condition is met. The Console provides configuration assistants to make the configuration task easier. To create an action type:
b) Click Policies and Actions sub-tab under Configuration tab.
On the Actions sub-tab, click New in the Actions table.
c) Select “JMX Notification” for the notification type from the drop down list and click Next.
d) Give a name to the notification type (myJMX)
e) Keep the Notification Type to its default value
f) Check the Enable Notification checkbox and click OK to create the notification type.

4. Create a policy to detect servers having stuck threads

Now, we will create the policy rule based on runtime mbean data. Specifically, we will use the StuckThreadCount attribute on the WorkManagerRuntime mbeans. For each server within the domain, there is a WorkManagerRuntime mbean in the domain runtime mbean server. The StuckThreadCount attribute reflects the current number of stuck threads on the server. We will configure a rule which will fire as soon as one server in the targets list has a stuck thread.
a) Click on the Policies sub-tab -> Configuration. Click New in the Policies table.
b) Set Policy Name to WatchServerStuckThread. Select Collected Metrics for Policy Type, check Enabled checkbox and click Next.
c) Set the Policy Expression as below and Click Next:

wls.runtime.query('com.bea:Type=WorkManagerRuntime,*','StuckThreadCount').stream().anyMatch( x -> x > 0 )

Note that the syntax of Policy Expressions has changed from previous WebLogic versions. Starting with this version, the policy expression has to be provided in the Java Expression Language (EL).

d) Keep The Frequency to “Every N Seconds” and Click Next
e) Set the Repeat parameter to 5 and click Next
f) In this wizard keep the default and click Next
g) In the diagnostic Actions part, in the available actions, select the myJMX created earlier and move it to the “chosen” actions.
h) Click Finish

5. Create a policy to detect high memory usage on the WebLogic Server

Now, we will create the policy rule based on runtime mbean data. Specifically, we will use the HeapFreePercent attribute on the JVMRuntime mbean. For each server within the domain, there is a JVMRuntime mbean in the domain runtime mbean server. The HeapFreePercent attribute reflects the current percentage of free heap on the server. We will configure a rule which will fire as soon as the heap free percentage of a server in the targets list drops below 20%.
a) Click on the Policies sub-tab -> Configuration. Click New in the Policies table.
b) Set Policy Name to WatchServerLowHeapFreePercent. Select Collected Metrics for Policy Type, check the Enabled checkbox and click Next.
c) Set the Policy Expression as below and Click Next:

wls.runtime.serverRuntime.JVMRuntime.heapFreePercent < 20

d) Keep The Frequency to “Every N Seconds” and Click Next
e) Set the Repeat parameter to 5 and click Next
f) In this wizard keep the default and click Next
g) In the diagnostic Actions part, in the available actions, select the myJMX created earlier and move it to the “chosen” actions.
h) Click Finish
6. Target the Diagnostic module to the WebLogic Server server1
a) Click on the Targets TAB
b) In the list of possible Targets, select the Server1 and click on Save
c) Activate Changes
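
As mentioned at the beginning, the same configuration can be scripted with WLST instead of the Console. The following is an untested sketch covering only the module, the JMX action, the stuck-thread policy and the targeting; the wlst.sh path, admin URL and credentials are assumptions, and the bean/method names should be double checked against the WLDF configuration API:

$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic','Welcome1','t3://vm01:7001')
edit(); startEdit()
# diagnostic module with a 10 second metric sampling period
wldf = cmo.createWLDFSystemResource('myWLDF')
res = wldf.getWLDFResource()
res.getHarvester().setSamplePeriod(10000)
# JMX notification action
wn = res.getWatchNotification()
jmx = wn.createJMXNotification('myJMX')
jmx.setEnabled(true)
# stuck thread policy, using the same EL expression as in the console steps above
watch = wn.createWatch('WatchServerStuckThread')
watch.setRuleType('Harvester')
watch.setRuleExpression("wls.runtime.query('com.bea:Type=WorkManagerRuntime,*','StuckThreadCount').stream().anyMatch( x -> x > 0 )")
watch.setEnabled(true)
watch.addNotification(jmx)
# target the module to Server1
wldf.addTarget(getMBean('/Servers/Server1'))
save(); activate()
exit()
EOF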

Receiving Notifications

WLDF sends a JMX notification on a specific WLDF runtime mbean, whose ObjectName is of the form:
com.bea:Name=DiagnosticsJMXNotificationSource,ServerRuntime=$SERVER,Type=WLDFWatchJMXNotificationRuntime,WLDFRuntime=WLDFRuntime,WLDFWatchNotificationRuntime=WatchNotification
where $SERVER is the name of the WebLogic Server instance. For our case (Server1), it is:
com.bea:Name=DiagnosticsJMXNotificationSource,ServerRuntime=Server1,Type=WLDFWatchJMXNotificationRuntime,WLDFRuntime=WLDFRuntime,WLDFWatchNotificationRuntime=WatchNotification
By registering for JMX notifications on this mbean, a client program can listen to generated notifications.
We will use the JMXWatchNotificationListener.java provided in the Oracle WLDF documentation (see references). It is a simple notification listener for WLDF JMX notifications. It simply prints the contents of received notifications, but can be easily adapted to perform other actions.

A sample Java code of such listener can be downloaded from the Oracle WebLogic Diagnostic Framework documentation (here)

To run it for this blog sample:

java JMXWatchNotificationListener vm01 7006 weblogic Welcome1 Server1

Note: The WebLogic Managed Server named Server1 is listening on port 7006.
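
The listener connects over t3, so when compiling and running it outside of a WebLogic Server JVM the WebLogic client classes must be on the classpath. A sketch only, assuming the thin t3 client jar from $WL_HOME/server/lib is sufficient for this JMX use case:

# compile and run the listener with the thin t3 client on the classpath
export CLASSPATH=$WL_HOME/server/lib/wlthint3client.jar:.
javac JMXWatchNotificationListener.java
java JMXWatchNotificationListener vm01 7006 weblogic Welcome1 Server1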

Sample WLDF message sent when the WebLogic Server has stuck threads

Notification name: myJMXNotif called. Count= 79.
Watch severity: Notice
Watch time: Apr 24, 2018 12:08:35 PM CEST
Watch ServerName: Server1
Watch RuleType: Harvester
Watch Rule: wls.runtime.query('com.bea:Type=WorkManagerRuntime,*','StuckThreadCount').stream().anyMatch( x -> x > 0 )
Watch Name: WatchServerStuckThread
Watch DomainName: wldf_domain
Watch AlarmType: None
Watch AlarmResetPeriod: 60000

Sample WLDF message sent when the WebLogic Server heap free becomes low (less than 20%)

Notification name: myJMXNotif called. Count= 114.
Watch severity: Notice
Watch time: Apr 24, 2018 12:11:45 PM CEST
Watch ServerName: Server1
Watch RuleType: Harvester
Watch Rule: wls.runtime.serverRuntime.JVMRuntime.heapFreePercent < 20
Watch Name: WatchServerLowHeapFreePercent
Watch DomainName: wldf_domain
Watch AlarmType: None
Watch AlarmResetPeriod: 60000

 

This article, Configuring WebLogic Server 12.2.1.3 Diagnostics Framework policies and actions, first appeared on Blog dbi services.

PDB RMAN backups available after plugging in on a new CDB with Oracle 18c

Tue, 2018-05-15 08:53

With Oracle 18c, it is possible to use PDB RMAN backups created on the source CDB (they are called PREPLUGIN backups) after the PDB has been relocated to a target CDB.

In my environment, my original CDB is DB18, with the PDB named pdborig. The target CDB is PSI18.

The first step consists of running an RMAN backup on pdborig:

oracle@localhost:/u00/app/oracle/ [DB18] rman target sys/manager@pdborig
Recovery Manager: Release 18.0.0.0.0 - Production on Tue May 15 10:57:38 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB18:PDBORIG (DBID=3031125269)

RMAN> backup pluggable database pdborig plus archivelog;

Starting backup at 15-MAY-2018 10:57:55
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=80 device type=DISK
skipping archived logs when connected to a PDB
backup cancelled because there are no files to backup
Finished backup at 15-MAY-2018 10:57:56

Starting backup at 15-MAY-2018 10:57:56
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00029 name=/u00/app/oracle/oradata/DB18/pdborig/undotbs01.dbf
input datafile file number=00027 name=/u00/app/oracle/oradata/DB18/pdborig/system01.dbf
input datafile file number=00028 name=/u00/app/oracle/oradata/DB18/pdborig/sysaux01.dbf
input datafile file number=00030 name=/u00/app/oracle/oradata/DB18/pdborig/users01.dbf
channel ORA_DISK_1: starting piece 1 at 15-MAY-2018 10:57:56
channel ORA_DISK_1: finished piece 1 at 15-MAY-2018 10:57:59
piece handle=/u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/
2018_05_15/o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp 
tag=TAG20180515T105756 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 15-MAY-2018 10:57:59

Starting backup at 15-MAY-2018 10:57:59
using channel ORA_DISK_1
skipping archived logs when connected to a PDB
backup cancelled because there are no files to backup
Finished backup at 15-MAY-2018 10:57:59

Then we have to export the RMAN backup metadata of the PDB into its dictionary using dbms_pdb.exportrmanbackup():

oracle@localhost:/u00/app/oracle/ [DB18] sqlplus sys/manager@pdborig as sysdba

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:00:38 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> execute dbms_pdb.exportrmanbackup();

PL/SQL procedure successfully completed.

As my environment is configured with TDE, in order to unplug pdborig we need to export the master key of the container database, otherwise we will receive the following error message:

SQL> alter pluggable database pdborig 
unplug into '/home/oracle/pdborig.xml';
alter pluggable database pdborig unplug into '/home/oracle/pdborig.xml'
*
ERROR at line 1:
ORA-46680: master keys of the container database must be exported

We export the master key:

SQL> alter session set container=PDBORIG;
Session altered

SQL> administer key management 
  2  export encryption keys with secret "manager_cdb" 
  3  to '/home/oracle/pdborig.p12'
  4  identified by manager_cdb;

keystore altered.

SQL> alter pluggable database PDBORIG close immediate;

Pluggable database altered.

SQL> alter pluggable database PDBORIG unplug into '/home/oracle/pdborig.xml';

Pluggable database altered.

Finally, on the target CDB named PSI18, we first have to create a wallet and open the keystore. Just remember you have to define wallet_root and tde_configuration in your CDB environment in order to use TDE:


SQL> show parameter wallet

NAME		       TYPE	    VALUE
-------------------- ----------- -------------------------------------
wallet_root	       string	 /u00/app/oracle/admin/PSI18/walletcdb

SQL> alter system set tde_configuration="keystore_configuration=file";
System altered.

We create and open the keystore on the target CDB and we import the master key:

SQL> administer key management create keystore identified by manager_cdb;

keystore altered.

SQL> administer key management set keystore open 
identified by manager_cdb container=all;

keystore altered.

SQL> alter session set container=pdb1;

Session altered.

SQL> administer key management import encryption keys 
     with secret "manager_cdb" from '/home/oracle/pdborig.p12'
     identified by "manager_cdb" with backup;

keystore altered.

We create pdbnew on the target CDB using pdborig.xml:

SQL> create pluggable database pdbnew using '/home/oracle/pdborig.xml'
file_name_convert=
('/u00/app/oracle/oradata/DB18/pdborig','/home/oracle/oradata/PSI18/pdbnew');

Pluggable database created.

We open the pluggable database pdbnew:

SQL> alter pluggable database pdbnew open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 4 PDBNEW			  READ WRITE NO

And now that PDBNEW has been plugged into the target CDB, we can check whether the RMAN backups are visible, because we exported the RMAN backup metadata before the unplug. To visualize them we have to use the preplugin clause:

RMAN> list preplugin backup of pluggable database pdbnew;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
9       Full    463.15M    DISK        00:00:01     15-MAY-2018 10:56:51
        BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105650
        Piece Name: /u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105650_fho86ltx_.bkp
  List of Datafiles in backup set 9
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
10      Full    463.15M    DISK        00:00:01     15-MAY-2018 10:57:57
        BP Key: 10   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105756
        Piece Name: /u00/app/oracle/fast_recovery_area/DB18/DB18/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
  List of Datafiles in backup set 10
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

We can also display the preplugin archive log backups:

RMAN> list preplugin archivelog all;

List of Archived Log Copies for database with db_unique_name PSI18
=====================================================================

Key     Thrd Seq     S Low Time            
------- ---- ------- - --------------------
.....

6       1    16      A 15-MAY-2018 10:08:53
/u00/app/oracle/fast_recovery_area/archivelog/2018_05_15/o1_mf_1_16_fho5r944_.a
...

So let’s see if we can make a restore and recover test:

We delete the users01.dbf datafile:

oracle@localhost:/u00/app/oracle/ [PSI18] rm /home/oracle/oradata/PSI18/pdbnew/users01.dbf

oracle@localhost:/u00/app/oracle/oradata/DB18/pdbseed/ [PSI18] sq

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:20:47 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> alter pluggable database pdbnew close;

Pluggable database altered.

SQL> alter pluggable database pdbnew open;
alter pluggable database pdbnew open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 23 - see DBWR trace file

Now we try to restore: we connect with RMAN to the target CDB and we set the PDB that needs to be restored with the command set preplugin container=pdbnew:

oracle@localhost:/u00/app/oracle/ [PSI18] rman target sys/manager@psi18

Recovery Manager: Release 18.0.0.0.0 - Production on Tue May 15 11:25:06 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PSI18 (DBID=3896993764)

RMAN> set preplugin container=pdbnew;

executing command: SET PREPLUGIN CONTAINER
using target database control file instead of recovery catalog

RMAN> list preplugin backup of pluggable database pdbnew;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
9       Full    463.15M    DISK        00:00:01     15-MAY-2018 10:56:51
        BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105650
        Piece Name: /u00/app/oracle/fast_recovery_area/
/6C3BAD3B7C73354AE0530100007F9AD9/backupset/
2018_05_15/o1_mf_nnndf_TAG20180515T105650_fho86ltx_.bkp
  List of Datafiles in backup set 9
  Container ID: 4, PDB Name: PDBNEW
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
10      Full    463.15M    DISK        00:00:01     15-MAY-2018 10:57:57
        BP Key: 10   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105756
        Piece Name: /u00/app/oracle/fast_recovery_area
/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
  List of Datafiles in backup set 10
  Container ID: 4, PDB Name: PDBNEW
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

We run the restore command with the preplugin clause:

RMAN> restore pluggable database pdbnew from preplugin;

Starting restore at 15-MAY-2018 11:26:23
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00020 to 
/home/oracle/oradata/PSI18/pdbnew/system01.dbf
channel ORA_DISK_1: restoring datafile 00021 to 
/home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00022 to 
/home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00023 to 
/home/oracle/oradata/PSI18/pdbnew/users01.dbf
channel ORA_DISK_1: reading from backup piece /u00/app/oracle/fast_recovery_area/
DB18/DB18/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
channel ORA_DISK_1: piece handle=/u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp tag=TAG20180515T105756
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 15-MAY-2018 11:26:28

We run the recover command with the preplugin clause:

RMAN> recover pluggable database pdbnew from preplugin;

Starting recover at 15-MAY-2018 11:27:02
using channel ORA_DISK_1

starting media recovery

unable to find archived log
archived log thread=1 sequence=17
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 05/15/2018 11:27:03
RMAN-06054: media recovery requesting unknown archived log 
for thread 1 with sequence 17 and starting SCN of 1081326

We have to catalog the archive logs generated after the backup into the target CDB by issuing the catalog preplugin archivelog command:

RMAN> catalog preplugin archivelog '/u00/app/oracle/fast_recovery_area/
DB18/DB18/archivelog/2018_05_15/o1_mf_1_17_fhob69t7_.arc';

cataloged archived log
archived log file name=/u00/app/oracle/fast_recovery_area/DB18/DB18/
archivelog/2018_05_15/o1_mf_1_17_fhob69t7_.arc RECID=7 STAMP=0

Finally the recover command runs successfully:

RMAN> recover pluggable database pdbnew from preplugin;

Starting recover at 15-MAY-2018 11:32:25
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 17 is already on disk as file 
/u00/app/oracle/fast_recovery_area/DB18/DB18/archivelog/2018_05_15/
o1_mf_1_17_fhob69t7_.arc
media recovery complete, elapsed time: 00:00:00
Finished recover at 15-MAY-2018 11:32:26

We finish the recover and open the target PDB:

RMAN> recover pluggable database pdbnew;

Starting recover at 15-MAY-2018 11:33:10
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 15-MAY-2018 11:33:10

RMAN> alter pluggable database pdbnew open;

Statement processed

RMAN> exit


Recovery Manager complete.

As far as we can see, the target PDB has been successfully restored and recovered:

oracle@localhost:/u00/app/oracle/oradata/DB18// [PSI18] sq

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:33:37 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 4 PDBNEW			  READ WRITE NO

The preplugin clause can be used on the target CDB with RMAN commands like restore, recover, crosscheck, list and delete.

This new feature helps maintain backup compliance after moving a PDB to a new target CDB.
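
For example, once the preplugin backups are no longer needed you could crosscheck (and then delete) them the same way. This is a sketch only, not run in this post, and the exact PREPLUGIN syntax should be verified against the RMAN reference for your release:

rman target sys/manager@psi18 <<'EOF'
set preplugin container=pdbnew;
crosscheck preplugin backup of pluggable database pdbnew;
list preplugin backup of pluggable database pdbnew;
EOF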

 

This article, PDB RMAN backups available after plugging in on a new CDB with Oracle 18c, first appeared on Blog dbi services.

ADWC – connect from your premises

Mon, 2018-05-14 05:44

In the previous post about the Autonomous Data Warehouse Service, I’ve run queries though the Machine Learning Notebooks. But you obviously want to connect to it from your premises, with SQL*Net.

Of course the connection, going through the public internet, must be secured. If you already use a managed service like the Oracle Exadata Express Cloud Service, you already know how to do it: download a .zip containing the connection string and the wallet and certificate for SQL*Net encryption.

You get it from the Service Console, logged in as the ADMIN user, under the Administration tab. Download Client Credentials asks you for the wallet password. However, this is not a password that protects the .zip file, and the .zip file contains an auto-login wallet, so keep it secured.

SQL Developer

The simplest use of this file is with SQL Developer because you don’t even have to unzip it. Just choose a ‘Cloud PDB’ connection type, enter the path of the .zip file as Configuration File, the password as Keystore Password, and the ADMIN user (or any user you have created with the Oracle ML Users page).

In the tnsnames.ora provided in the .zip file there are 3 network service names connecting to 3 different services: _low, _medium and _high. They map to the resource manager plan so that you can run your queries with different priorities.

SQLcl thin

With SQLcl you do not need to unzip the credentials file, at least when you are using thin JDBC (the default).
You just register it with:

18:53:12 SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
Using temp directory:/tmp/oracle_cloud_config4174171941677611695

and you are ready to connect to the _low, _medium and _high services.

As you see, it unzips the file into a temporary directory so you have to do it each time you run SQLcl. You can add this to login.sql and may add some housekeeping as this temporary directory may remain. Or run all this in a docker container.
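
A minimal sketch of that login.sql approach, assuming SQLcl (sql) is on the PATH and SQLPATH points to ~/sql as later in this post:

# register the cloud config automatically whenever SQLcl starts
echo "set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip" >> ~/sql/login.sql
SQLPATH=~/sql sql /nolog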

This is simple, at least if you are running the latest Java 8, which includes the Java Cryptography Extension (JCE). If that is not the case, as when you use the Java Home provided with 18c (1.8.0_152), you have to add the jars yourself. But don’t worry, all is explained:

SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
***** JCE NOT INSTALLED ****
***** CAN NOT CONNECT TO PDB Service without it ****
Current Java: /u01/app/oracle/product/18.0.0/dbhome_1/jdk/jre
Follow instructions on http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
SQL>

Of course the alternative is to install the latest Java

$ sudo rpm -i jre-8u171-linux-x64.rpm
Unpacking JAR files...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

and set JAVA_HOME to it before starting SQLcl

$ export JAVA_HOME=/usr/java/jre1.8.0_171-amd64
$ SQLPATH=~/sql bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql /nolog

Anyway, in all cases, once the credential .zip is provided you can connect with Thin JDBC (the default) with a user/password that has been created in the ADWC:

SQL> connect admin@adwc_high
Password? (**********?) ****************
AArray = [B@24959ca4
AArray = [B@10289886
AArray = [B@32115b28
AArray = [B@2ad48653
Connected.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
6 esj1pod6 12.2.0.1.0 13-MAY-18 OPEN YES 6 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE RAC

What’s in the .zip

The tnsnames.ora has entries for the low, medium, high services.

adwc_high = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_low = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_low.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_medium = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_medium.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )

The sqlnet.ora mentions the wallet used for SQL*Net encryption:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

Note that the directory points to the default location, and you probably want to change it to the TNS_ADMIN directory where you unzipped the file.
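
For example (a sketch only, using the unzip location from the examples below as TNS_ADMIN):

export TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC
unzip -o /media/sf_share/ADWC/wallet_ADWC.zip -d $TNS_ADMIN
# replace the default wallet directory with the one where the files were unzipped
sed -i 's|DIRECTORY="?/network/admin"|DIRECTORY="'$TNS_ADMIN'"|' $TNS_ADMIN/sqlnet.ora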

In the wallet location, you find ewallet.p12, which contains the certificate and private keys and is protected by the password, and cwallet.sso, which can be opened without providing the password, so protect these files with file permissions.

You also find keystore.jks, which contains the same self-signed certificate but in JKS truststore format, and it is referenced from the ojdbc.properties file:

oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))

Once you have unzipped the credentials, you can use them to connect with OCI.

SQL*Plus or SQLcl -oci

If you want to connect with an OCI client, you have to unzip this file into your TNS_ADMIN directory. That can be $ORACLE_HOME/network/admin, but be careful not to overwrite existing files, or it can be a new directory that you will use by setting the TNS_ADMIN environment variable (or registry entry) to it.

Here are some examples where I set TNS_ADMIN to the directory where I unzipped the credentials:

TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlcl -oci /nolog
TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlplus /nolog

Any application using OCI (the oracle client, which can be the InstantClient or a full database installation) can use this without providing any password.

Database Link

With the unzipped credentials you can connect through OCI, which means that you can also create a database link to the ADWC database. The credentials must be unzipped (or merged) into the TNS_ADMIN (or default ?/network/admin) directory of the source instance:

SQL> create database link ADWC connect to ADMIN identified by "Ach1z0#dAch1z0#d" using 'adwc_high';
Database link created.
 
SQL> select banner from v$version@ADWC;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Client Credential password

When downloading the .zip you are asked for a password to ‘protect this file to prevent unauthorized database access’. But that is misleading. The .zip is not password protected: anyone can open it. And it contains an auto-login wallet, so anybody can use it. You can access the database without this password. Of course, you cannot connect if you don’t have a user/password with the CREATE SESSION privilege, but you can reach the database for user credential verification.

So what is this password used for? We have seen that SQL Developer needs the password (or you will get a failure such as java.io.IOException: Keystore was tampered with, or password was incorrect). So you may remove the .sso auto-login wallet from the .zip file when it is used only by SQL Developer. But then you have to think about where the password is stored in SQL Developer. Is it more secure than the .sso?

As long as the auto-login wallet is there, you do not need to store the wallet password. But of course, you must protect the credential files (see the sketch below).
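
For example, removing the auto-login wallet from the .zip (for a SQL Developer-only usage) and restricting the permissions on the extracted files could look like this (a sketch, reusing the file names from this post):

# remove the auto-login wallet from the archive (SQL Developer will then require the keystore password)
zip -d /media/sf_share/ADWC/wallet_ADWC.zip cwallet.sso
# and restrict access to the extracted credential files
chmod 600 /media/sf_share/ADWC/wallet_ADWC/ewallet.p12 /media/sf_share/ADWC/wallet_ADWC/cwallet.sso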

 

Cet article ADWC – connect from your premises est apparu en premier sur Blog dbi services.

Deploying EDB containers in MiniShift/OpenShift

Fri, 2018-05-11 10:15

In this post we’ll look at how we can deploy EnterpriseDB containers in MiniShift. If you need to set up MiniShift first, have a look here. Here we’ll do the setup with the MiniShift console; in a later post we’ll do the same by using the command line tools.

As a few containers will be running in the end, MiniShift was started with more resources:

dwe@dwe:/opt$ minishift delete
dwe@dwe:/opt$ minishift start --cpus 4 --disk-size 30GB --memory 4GB

Once MiniShift is up and running open the MiniShift console and login as developer/admin:

dwe@dwe:/opt$ minishift console

Selection_001

The first thing we need to do is to grant the necessary permissions after we stepped into “My Project”:
Selection_002

The permissions are in Resources->Membership. Add admin, edit and view to the default account:
Selection_004

For accessing the EnterpriseDB container repository a new secret needs to be created which contains the connection details. Secrets are under Resources->Secrets:
Selection_005
Selection_006

As databases are happy when they can store their data on persistent storage we need a volume. Volumes can be created under “Storage”:
Selection_007
Selection_008
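
For reference, the same preparation can roughly be scripted with the oc client (only a sketch, with hypothetical names for the secret and the claim definition file; the console is what is used in this post, and the command line tools are covered in a later post):

# add the admin, edit and view roles to the default service account of the project
oc policy add-role-to-user admin -z default
oc policy add-role-to-user edit -z default
oc policy add-role-to-user view -z default
# pull secret for the EnterpriseDB registry (user name and password are placeholders)
oc create secret docker-registry edb-registry-secret \
  --docker-server=containers.enterprisedb.com \
  --docker-username=dbi-services --docker-password=xxxxx --docker-email=none@none.com
# persistent volume claim defined in a YAML file of your own
oc create -f edb-pvc.yaml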

Now we need a local registry where we can push the EnterpriseDB containers to:

dwe@dwe:~$ minishift ssh
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802
docker@minishift:~$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
Unable to find image 'registry:2' locally
2: Pulling from library/registry
81033e7c1d6a: Pull complete 
...
Status: Downloaded newer image for registry:2
14e85f4e2a36e727a0584803e49bbd690ffdb092c02238a241bd2ad003680625
docker@minishift:~$ docker login containers.enterprisedb.com
Username: dbi-services
Password: 
Login Succeeded
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-as:v10.3
v10.3: Pulling from test/edb-as
d9aaf4d82f24: Pulling fs layer 
...
Status: Downloaded newer image for containers.enterprisedb.com/test/edb-as:v10.3
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-as:v10.3 localhost:5000/test/edb-as:v10.3
docker@minishift:~$ docker push localhost:5000/test/edb-as:v10.3
The push refers to a repository [localhost:5000/test/edb-as]
274db5c4ff47: Preparing 
...
docker@minishift:~$ docker pull containers.enterprisedb.com/test/edb-pgpool:v3.5
v3.5: Pulling from test/edb-pgpool
...
docker@minishift:~$ docker tag containers.enterprisedb.com/test/edb-pgpool:v3.5 localhost:5000/test/edb-pgpool:v3.5
docker@minishift:~$ docker push localhost:5000/test/edb-pgpool:v3.5
The push refers to a repository [localhost:5000/test/edb-pgpool]
8a7df26eb139: Pushed 
...

This is all that is required for the preparation. The next step is to import the template which specifies the setup. For this little demo we’ll use this one:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-0
   annotations:
    description: "Standard EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres' 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres' 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres' 
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: containers.enterprisedb.com/test/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

For importing that into OpenShift go to “Overview” and select “Import YAML/JSON”:
Selection_010
Selection_011
Selection_012

This imports the template but does not process it right now. When you go back to “Overview” you should see a new template which you can provision:
Selection_013
Selection_014

Selecting the new template brings you to the specification of the variables. The only bits you need to adjust are the values for the volume and the volume claim:
Selection_015
Selection_016

A few moments later the EDB containers are up and running:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-fdr5j   1/1       Running   0          1m
edb-pgpool-1-9twmc   1/1       Running   0          1m
edb-pgpool-1-m5x44   1/1       Running   0          1m

Currently there are two pgpool instances and one database container. You can double-check that the instance is really running with:

dwe@dwe:~$ oc rsh edb-as10-0-1-fdr5j
sh-4.2$ psql postgres
psql.bin (10.3.8)
Type "help" for help.

postgres=# select version();
                                                   version                                                   
-------------------------------------------------------------------------------------------------------------
 EnterpriseDB 10.3.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

Going back to the “Overview” page in the console shows the same information:
Selection_019

In the next post we’ll scale up the deployment by adding two replicas and configuring access from outside the cluster.

 

Cet article Deploying EDB containers in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

SP2 for SQL Server 2016 is available with new helpful DMVs

Fri, 2018-05-11 09:23

Last month (April 24, 2018), the Service Pack 2 for SQL Server 2016 was released and distributed.
This Service Pack has new DMVs, already available in SQL Server 2017 RTM.

In this article, I will just write a few words about 2 DMVs (sys.dm_db_log_stats & sys.dm_db_log_info) and a new column (modified_extent_page_count) in the DMV sys.dm_db_file_space_usage that I presented during our last event about SQL Server 2017. I think they are really helpful for DBAs.
It’s also the opportunity to present the demo that I created for our event.

Preparation

First, I create the database smart_backup_2016 and a table Herge_Heros

CREATE DATABASE [smart_backup_2016]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'smart_backup_2016', FILENAME = N'G:\MSSQL\Data\smart_backup_2016.mdf' )
 LOG ON
( NAME = N'smart_backup_2016_log', FILENAME = N'G:\MSSQL\Log\smart_backup_2016_log.ldf' )
GO

USE smart_backup_2016
GO

CREATE TABLE [dbo].[Herge_Heros]
   (
   [ID] [int] NULL,
   [Name] [nchar](10) NULL
   ) ON [PRIMARY]
GO

I do a little insert and run a first Full and a first TLog Backup

INSERT INTO [Herge_Heros] VALUES(1,'Tintin') -- Tim
INSERT INTO [Herge_Heros] VALUES(2,'Milou') -- Struppi


BACKUP DATABASE [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.bak' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO
BACKUP Log [smart_backup_2016] TO  DISK = N'C:\Temp\smart_backup.log' WITH NOFORMAT, NOINIT,  NAME = N'smart_backup-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Then, I insert a lot of rows to get more than 50% of modified pages:

INSERT INTO [Herge_Heros] VALUES(3,'Quick') --Strups
INSERT INTO [Herge_Heros] VALUES(4,'Flupke')  --Stepppke
GO 100000

Now, the demo is ready!

new column modified_extent_page_count in sys.dm_db_file_space_usage

smart_backup01
As you can see in this screenshot, the column really exists in SQL Server 2016 SP2 (13.0.5026.0).
With it you can, as we do in our DMK maintenance, create a backup strategy that adapts to the amount of change instead of depending only on time.
In this stored procedure, if the modified pages are greater than 50% of the total pages, it will do a Full Backup and if the modified pages are less than 50%, it will do a Differential Backup.

USE [dbi_tools]
GO

CREATE or ALTER PROCEDURE [maintenance].[dbi_smart_backup] @database_name sysname
as
DECLARE @pages_changes Numeric(10,0)
DECLARE @full_backup_threshold INT
DECLARE @diff_backup_threshold INT
DECLARE @sql_query nvarchar(max)
DECLARE @page_change_text nvarchar(20)
DECLARE @param nvarchar(50)
DECLARE @backupfile nvarchar(2000)
SET @full_backup_threshold=50
SET @diff_backup_threshold=0
SET @param = N'@pages_changesOUT nvarchar(20) OUTPUT'
SET @sql_query =N'SELECT @pages_changesOUT=( 100 * Sum(modified_extent_page_count) / Sum(total_page_count) ) FROM ['+@database_name+'].sys.dm_db_file_space_usage'

EXECUTE sp_executesql @sql_query,@param ,@pages_changesOUT=@page_change_text OUTPUT; 
SET @pages_changes = CAST(@page_change_text AS Numeric(10,0)) 
IF @pages_changes > @full_backup_threshold
  BEGIN
     --Full Backup threshold exceeded, take a full backup
     Print 'Full Backup Threshold exceeded, take a full backup'
     SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.bak'
   BACKUP DATABASE @database_name TO DISK=@backupfile
  END
  ELSE
  BEGIN
	   IF @pages_changes >= @diff_backup_threshold
		BEGIN
			-- Diff Backup threshold exceeded, take a differential backup
			Print 'Diff Backup threshold exceeded, take a differential backup'
			SET @backupfile = N'C:\Temp\'+@database_name+N'_' + replace(convert(nvarchar(50), GETDATE(), 120), ':','_') + N'.dif'
			BACKUP DATABASE @database_name TO DISK=@backupfile WITH differential
		END
	ELSE
		BEGIN
			-- No threshold exceeded, No backup
		PRINT 'No threshold exceeded, No backup'   
		END
  END
GO

Now, I run the stored procedure [maintenance].[dbi_smart_backup] from the dbi_tools database:

USE smart_backup_2016;
GO
EXEC [dbi_tools].[maintenance].[dbi_smart_backup] @database_name = N'smart_backup_2016'

smart_backup02
In this case the dbi backup stored procedure does a Full Backup because the modified pages amount to 64%.
I check the status of the modified pages again: they are now at 5%.
smart_backup03
If I run the stored procedure again, it takes a differential backup.
smart_backup04
My backup strategy now adapts to the page changes in the database and is no longer based only on time (RTO vs RPO).
Let’s use the new DMV sys.dm_db_log_stats to do the same with the TLog backup.

DMV sys.dm_db_log_stats

This DMV gives really good information about the transaction log files and can help to adapt the backup strategy and also control the growth of the file.
The DMV is very easy to use. For example, if you want to know how much the log has grown since the last TLog backup, use the column log_since_last_log_backup_mb:

SELECT log_since_last_log_backup_mb from sys.dm_db_log_stats(DB_ID('smart_backup_2016'))
GO

smart_backup05
As shown below, I created in our DMK maintenance an adapted TLOG backup procedure [dbi_smart_tlog_backup]:
smart_backup06
If the TLOG has grown by more than 5 MB since the last TLOG backup, it will take a TLOG backup; if not, no TLOG backup is taken.
In my example, the growth is 548 MB, so a TLOG backup is necessary.
smart_backup07
Afterwards, I check the size again and, as you can see, the growth since the last TLOG backup is 0.07 MB.
smart_backup08
As you can see, no TLOG backup… My backup strategy is adapted to the load! ;-)
smart_backup09

DMV sys.dm_db_log_info

This DMV gives us all VLF (Virtual Log File) information without having to use DBCC LOGINFO anymore.
You can use this DMV very easily like this:

SELECT [name] AS 'Database Name', COUNT(l.database_id) AS 'VLF Count'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) l
GROUP BY [name]

smart_backup10

These DMVs are very helpful and it is a good thing to have them in SQL Server 2016 now as well.

 

Cet article SP2 for SQL Server 2016 is available with new helpful DMVs est apparu en premier sur Blog dbi services.

SQL Developer Web on the Oracle Cloud

Thu, 2018-05-10 12:21

You like SQL Developer because it is easy to install (just unzip a jar) and has a lot of features? Me too. It can be even easier if it is provided as a web application: no installation, and no java to take all my laptop RAM…
When I say no installation, you will see that there are a few little things to set up here in DBaaS. That will probably be done for you in the managed services (PDBaaS) such as the ‘Express’ and ‘Autonomous’ ones.

CaptureSDW010
Be careful, Oracle is a top-down deployment company. It seems that new products are announced first and then people have to work hard to make them available, which means that if, like me, you want to test them immediately, you may encounter some disappointment.
The announcement was there. The documentation was there, mentioning that the Cloud Tooling must be upgraded to 18.2.3. But 18.2.3 was available only a few days later. You can check it from the place where the DBaaS looks for its software: https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/dbaas_patch, if you are not sure.

So, before being able to see SQL Developer in the colorful DBaaS landing page (where you can also access APEX for example) there’s a bit of command line stuff to do as root.

Install the latest Cloud Tooling

SQL Developer Web needs to be installed with the latest version of ORDS, which is installed with the latest version of Cloud Tooling aka dbaastools.rpm

You need to connect as root, so opc and then sudo

ssh opc@144.21.89.223
sudo su

Check if there is a new version to install:

dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}'

If something is returned (such as 18.2.3.1.0_180505.1604) you install it:

dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')

Actually I got an error, and I had to ^C:

[root@DB18c opc]# dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')
DBAAS CLI version 1.0.0
Executing command dbpatchm --run -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
/var/opt/oracle/patch/dbpatchm -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
Use of uninitialized value in concatenation (.) or string at /var/opt/oracle/patch/dbpatchm line 4773.
^C

But finally it was installed, because the ‘list_tools’ check above now returns nothing.

Enable SQL Developer Web

SQL Developer Web (SDW) is running in ORDS (Oracle REST Data Services) and must be enabled with the ORDS Assistant with the enable_schema_for_sdw action.
Here I’ll enable it at CDB level. I provide a password for the SDW schema. I create it in a file:

cat > password.txt <<<'Ach1z0#d'

You may secure that better than I do, as I’m putting the password on command line here. But this is only a test.

Then, still as root, I call the ORDS assistant to install SDW in C##SQLDEVWEB (as I’m installing it in CDB$ROOT I need a common user name).


/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema="C##SQLDEVWEB" -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true

Here is the output. The last lines are important:

WARNING: Couldn't obtain the "dbname" value from the assistant parameters nor the "$OCDE_DBNAME" environment variable
Starting ORDS
Logfile is /var/opt/oracle/log/ords/ords_2018-05-10_10:44:12.log
Config file is /var/opt/oracle/ocde/assistants/ords/ords.cfg
INFO: Starting environment summary checks...
INFO: Database version : 18000
INFO: Database CDB : yes
INFO: Original DBaaS Tools RPM installed : dbaastools-1.0-1+18.1.4.0.0_180123.1336.x86_64
INFO: Actual DBaaS Tools RPM installed : dbaastools-1.0-1+18.2.3.1.0_180505.1604.x86_64
INFO: DBTools JDK RPM installed : dbtools_jdk-1.8.0-2.74.el6.x86_64
INFO: DBTools JDK RPM "/var/opt/oracle/rpms/dbtools/dbtools_jdk-1.8.0-2.74.el6.x86_64.rpm" MD5 : 48f13bb401677bfc7cf0748eb1a6990d
INFO: DBTools ORDS Standalone RPM installed : dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64
INFO: DBTools ORDS Standalone RPM "/var/opt/oracle/rpms/dbtools/dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64.rpm" MD5 : 480355ac3ce0f357d5741c2c2f688901
INFO: DBTools DBaaS Landing Page RPM installed : dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64
INFO: DBTools DBaaS Landing Page RPM "/var/opt/oracle/rpms/dbtools/dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64.rpm" MD5 : af79e128a56b38de1c3406cfcec966db
INFO: Environment summary completed...
INFO: Action mode is "full"
INFO: Database Role is "PRIMARY"
INFO: Enabling "C##SQLDEVWEB" schema in "CDB$ROOT" container for SQL Developer Web...
 
SQL*Plus: Release 18.0.0.0.0 Production on Thu May 10 10:44:27 2018
Version 18.1.0.0.0
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
 
SQL> SQL> SQL> SQL> SQL> SQL> SQL> SQL Developer Web user enable starting...
Enabling "C##SQLDEVWEB" user for SQL Developer Web...
 
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Creating "C##SQLDEVWEB" user
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Call completed.
Commit complete.
PL/SQL procedure successfully completed.
Session altered.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
 
"C##SQLDEVWEB" user enabled successfully. The schema to access SQL Developer Web
is "c_sqldevweb"...
 
PL/SQL procedure successfully completed.
 
SQL Developer Web user enable finished...
Disconnected from Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
INFO: To access SQL Developer Web through DBaaS Landing Page, the schema "c_sqldevweb" needs to be provided...
INFO: "C##SQLDEVWEB" schema in the "CDB$ROOT" container for SQL Developer Web was enabled successfully...
 

The information to remember here is that I will have to provide the c_sqldevweb schema name (which is the name I provided, but lowercased and with sequences of ‘special’ characters replaced by an underscore). The URL schema alias is lowercase, but it seems that the user name itself has to be provided in uppercase on the login page.

Basically, what has been done is quite simple: create the C##SQLDEVWEB user and call ORDS.ENABLE_SCHEMA to enable it and map it to the URL.
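
Under the hood this is roughly equivalent to something like the following (just a sketch to show the idea, not the exact statements the assistant runs; the grants and the ORDS.ENABLE_SCHEMA parameters shown here are my assumptions):

sqlplus / as sysdba <<'SQL'
-- illustration only: create the common user and enable it for ORDS/SDW
create user C##SQLDEVWEB identified by "Ach1z0#d" container=all;
grant create session, dba to C##SQLDEVWEB container=all;
begin
  ords.enable_schema(
    p_enabled             => TRUE,
    p_schema              => 'C##SQLDEVWEB',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'c_sqldevweb',
    p_auto_rest_auth      => TRUE);
  commit;
end;
/
SQL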

DBCS Landing Page 2.0.0

Now I’m ready to see SQL Developer on the DBCS Landing Page. You access this page by:

  1. Enabling https access from internet (in Access Rules, enable ora_p2_httpssl)
  2. going to default web page for your service, in my case https://144.21.89.223

You may have to accept some self-signed certificates

And here it is with SQL Developer Web in the middle:
CaptureSDW011

The screenshot above shows PDB1/pdbadmin for the schema, but I installed it at CDB level and the log above tells me that the schema is c_sqldevweb, so I change the schema to c_sqldevweb on the login page. The direct URL in my example is https://144.21.89.223/ords/c_sqldevweb/_sdw.

I enter C##SQLDEVWEB (uppercase here) as the user and Ach1z0#d as the password.

And here is the Dashboard:
CaptureSDW012

Do not worry about the 97% storage used which tells me that SYSTEM is full. My datafiles are autoextensible.

Just go to the SQL Worksheet and check your files:

select tablespace_name,bytes/1024/1024 "MBytes", maxbytes/1024/1024/1024 "MaxGB", autoextensible from dba_data_files

Enable SDW for local PDB user

To enable a PDB local user, I run ORDS assistant with a local user name (PDBADMIN here) and an additional parameter with the PDB name (PDB1 here).


cat > password.txt <<<'Ach1z0#d'
/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema=PDBADMIN -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true -ords_sdw_schema_container=PDB1

Now, I can connect to it with PDB1/pdbadmin as schema name.

Error handling

CaptureRestCallFail
If, like me, you are not used to ORDS applications, you may waste some minutes looking at a splash screen waiting for the result. Always look at the message bar. All actions are REST calls and the message bar will show if a call is running or completed successfully or not. The example on the right shows ‘call failed’. You can click on it to see the REST call, and the error.

 

Cet article SQL Developer Web on the Oracle Cloud est apparu en premier sur Blog dbi services.

Some ideas about Oracle Database on Docker

Tue, 2018-05-08 15:55

This is not a ‘best practice’ but just some ideas about building Docker images to provide an Oracle Database. I started with the images provided by Oracle: https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance and this is great to validate the docker environment. Then I customized for my needs and here are the different points about this customization.

Do not send a huge context at each build attempt

I work by iteration. Sending a 3GB context each time I try a build is a no-go for me. So I quickly stopped putting the Oracle installation .zip in the context of my build. I already blogged about this.

There are several ways to avoid sending a big context, such as having the .zip on an NFS or HTTP server and ADDing it, or RUNning wget from there. I prefer to build one small image with this .zip that I’ll use later.

In my current directory I have the linuxx64_12201_database.zip which I explicitly send to the context with this .dockerignore:

*
!linuxx64_12201_database.zip

And I build a franck/oracle122/zip image with it:

FROM oraclelinux:7-slim
ADD linuxx64_12201_database.zip /var/tmp

When done, I’ll not have to send the context again and I will build my container from this one with another Dockerfile:

FROM franck/oracle122/zip
RUN yum -y install oracle-database-server-12cR2-preinstall unzip tar wget openssl vi && rm -rf /var/cache/yum

Do not yum install at each build attempt

In the same spirit, I build another intermediate image with the yum install above. The reason is that once I have it, I don’t need internet access anymore. I did that before boarding for an 8-hour flight: I built the above Dockerfile as franck/oracle122/prereq while on the airport wifi and will use it later as the base for the final Dockerfile:

.dockerignore:

*

Dockerfile:

FROM franck/oracle122/prereq
# then follow all the work which do not need large context or internet connection
...
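
The chain of builds then looks roughly like this (a sketch; the Dockerfile file names passed with -f are hypothetical, where this post actually switches the Dockerfile and .dockerignore between builds):

# one-time build that embeds the 3GB zip (the large context is sent only once)
docker build -t franck/oracle122/zip -f Dockerfile.zip .
# one-time build that does the yum install (internet access needed only once)
docker build -t franck/oracle122/prereq -f Dockerfile.prereq .
# the image I iterate on, built from the prereq image with a tiny context
docker build -t franck/oracle122 .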

Even if you are not on a plane, it is always good to avoid internet access. You probably had to get some doors opened in the firewall in order to pull the base image. Now that you have it, you should keep it. Or one day, the security team will close the door again and you will waste a few hours. That also means that you do not start with a :latest image but with a specific version.

Do the long steps first

The Dockerfile provided by Oracle starts with all ENV and a COPY to add all scripts into the container. The problem is that each time you want to change a script, the build has to start from this step. And then the long operations have to be done again: unzip, install,…

I have a small context here (only the scripts and configuration files) but I ADD or COPY them only when needed. For example, here, a modification in install.rsp will re-do the runInstaller step, but the unzip one will not have to be done again because the cache is re-used:

WORKDIR /var/tmp
RUN unzip linuxx64_12201_database.zip
COPY install.rsp /var/tmp
RUN ./database/runInstaller -silent -force -waitforcompletion -responsefile /var/tmp/install.rsp -ignoresysprereqs -ignoreprereq ; true

The script that will run the container is added only at the end so that I can modify and re-build quickly without re-doing the previous steps.

VOLUME ["/opt/oracle/pdbs"] EXPOSE 1521 5500
COPY docker_cmd.sh /opt/oracle
CMD exec /opt/oracle/docker_cmd.sh ${CDB} ${PDB

Another step that I do at the end is removing the files I do not need in the container, because that’s a trial-and-error approach and I want to build quickly. Of course, this may not be optimal for the size of all those layers, but I can reduce the final image later. The main feature of Docker build is the layers, and I use them to develop the Dockerfile without wasting my time. For the wasted storage, I use ZFS with block-level CoW, dedup and compression. For the final image, I’ll --squash it.
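
The squash itself could look like this (a sketch; --squash requires the Docker daemon to run with experimental features enabled, and the tag name is mine):

# merge all layers of the final image into one to get the real size reduction
docker build --squash -t franck/oracle122:squashed .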

Remove all unnecessary files

The detail will probably go into a future blog post. But as soon as runInstaller is done, and the latest bundle patch applied, you can remove a lot of directories that are not needed anymore:

rm -rf $ORACLE_HOME/inventory $ORACLE_HOME/.patch_storage

As soon as the database has been created with DBCA, I do not need the DBCA templates anymore:

rm -rf $ORACLE_HOME/assistants

As this container will run only the instance, I can remove:

rm -rf $ORACLE_HOME/sqldeveloper $ORACLE_HOME/suptools $ORACLE_HOME/jdk

And depending on the options I will provide in the database, I remove the big ones:

rm -rf $ORACLE_HOME/apex $ORACLE_HOME/javavm $ORACLE_HOME/md

There is also a lot to remove from $ORACLE_HOME/lib (I need only a few *.so* that I can determine with strace, perf, lsof, ldd) and from $ORACLE_HOME/bin (basically, I need oracle, tnslsnr, lsnrctl, and sqlplus). Those are executables and you can strip them to reduce the size further. Definitely remove the leftovers of the last relink, renamed as oracleO, …
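
A sketch of how this can be determined and done (for a development image only, clearly not supported):

# list the shared libraries the main executables really need
ldd $ORACLE_HOME/bin/oracle $ORACLE_HOME/bin/tnslsnr | awk '/=>/{print $3}' | sort -u
# strip the executables that are kept, to reduce their size further
strip --strip-unneeded $ORACLE_HOME/bin/oracle $ORACLE_HOME/bin/tnslsnr $ORACLE_HOME/bin/lsnrctl $ORACLE_HOME/bin/sqlplus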

Those are just examples; your list will depend on your version and usage, but this may reduce the image to 1GB or less. Of course, this is not supported. But the goal is to provide a small development database, not a reliable and efficient one for production.

Use ZFS for the storage driver

An Oracle Database is full of large files that are updated sparsely. Just forget about OVERLAY and OVERLAY2, which copy the whole file to the new layer when you update a single byte of it. I do not consider BTRFS seriously. In my opinion, ZFS is the only filesystem to consider for storing Docker images with large files. Enforce deduplication and compression to overcome the inflation of layering and the fact that sparse files are not handled. I think that recordsize=32k is a good idea from what I’ve seen about how Docker applies writes to layers. More detail in a future blog post.
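
A sketch of such a setup (the pool and dataset names are examples):

# dedicated ZFS dataset mounted where Docker stores its layers
systemctl stop docker
zfs create -o compression=lz4 -o dedup=on -o recordsize=32k -o mountpoint=/var/lib/docker rpool/docker
# tell the Docker daemon to use the zfs storage driver
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "zfs" }
EOF
systemctl start docker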

Note that layering issues are not only for build efficiency but also for container run. You will see that I put some datafiles in the image. Then, at database open, some blocks are changed (at least the headers) and I do not want a full file copy to the runnable layer. Block level CoW is required for that.

Create the CDB in the container

The container is the place to store all the software, and most of CDB$ROOT and PDB$SEED is part of the software distribution. This is what takes time when creating a database (catalog, catproc,…) and I definitely refuse to give a container to a developer where he will have to wait 10 minutes at run because the database has to be created on the external volume. A ‘docker run’ must be fast. And the external volume must contain only the data that has to be persisted, not 500MB of dbms_% package code, which will be all the same for all containers from the same image.

This means that I create the CDB during the build:

RUN /opt/oracle/product/12.2.0.1/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName ${CDB} -sid ${CDB} -initParams db_unique_name=${CDB},service_names=${CDB},shared_pool_size=600M,local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))' -createAsContainerDatabase true -numberOfPdbs 0 -sysPassword oracle -systemPassword oracle -emConfiguration NONE -datafileDestination /opt/oracle/oradata -recoveryAreaDestination /opt/oracle/fast_recovery_area -recoveryAreaSize 1024 -storageType FS -sampleSchema false -automaticMemoryManagement false -totalMemory 1024 -databaseType OLTP -enableArchive false -redoLogFileSize 10 -useLocalUndoForPDBs true -createListener LISTENER:1521 -useOMF true -dbOptions OMS:false,JSERVER:false,SPATIAL:false,IMEDIA:false,ORACLE_TEXT:false,SAMPLE_SCHEMA:false,CWMLITE:false,DV:false,APEX:false
RUN rm -rf /opt/oracle/product/12.2.0.1/dbhome_1/assistants/dbca/*

No PDB here, as the PDB will be created at ‘docker run’ time into the external volume. I use a template with datafiles here, but I may prefer to run the whole creation to have full control over it. I may even hack some .bsq and .sql files in RDBMS admin to reduce the size. I’m in archivelog mode here because I want to be able to flashback the PDB. The container is ephemeral. If it becomes too large (archive logs, audit, …) just remove it and start another one. Or add a script to remove the old files (those not required by guaranteed restore points).

Create the PDB in the external volume

The PDB is the only thing that must be persistent and controlled by the developer. I create it with the following in my docker_cmd.sh script (which is called from the Dockerfile CMD line, with the CDB name and PDB name as arguments):

create pluggable database $2 admin user admin identified by oracle create_file_dest='/opt/oracle/pdbs';
alter pluggable database $2 open;
show pdbs

The PDB is bound to the external volume ( VOLUME [“/opt/oracle/pdbs”] ) thanks to the 12.2 CREATE_FILE_DEST clause, so that the developer can create datafiles only there. The ‘docker run’ is then fast, as it is just a clone of PDB$SEED.

The developer will connect only to the PDB. He has nothing to do in CDB$ROOT. If there is a need to change something in CDB$ROOT, I’ll provide a new image. I may even define lockdown profiles rules to limit the PDB and define a listener where only the PDB registers.

Unplug the PDB at container stop

When the developer stops the container, I want to leave something consistent in the external volume. The way to do that quickly is a PDB unplug. An unplug to a PDB archive (a .pdb zip with all datafiles) would be nicer, but that takes too much time to create. I unplug to a .xml file. This is what I do on stop (SIGTERM and SIGINT):

alter pluggable database all close;
column pdb_name format a30
select pdb_id,pdb_name,status from dba_pdbs;
begin
for c in (select pdb_name from dba_pdbs where pdb_id>2) loop
dbms_output.put_line('-- Unpluging '||c.pdb_name);
execute immediate 'alter pluggable database "'||c.pdb_name||'" unplug into ''/opt/oracle/pdbs/'||c.pdb_name||'.xml''';
end loop;
for c in (select pdb_name from dba_pdbs where pdb_id>2 and status='UNPLUGGED') loop
dbms_output.put_line('-- Dropping '||c.pdb_name);
execute immediate 'drop pluggable database "'||c.pdb_name||'" keep datafiles';
end loop;
end;
/
-- All pluggable databases have been unplugged:
host ls /opt/oracle/pdbs/*.xml
-- Shutdown the CDB:
shutdown immediate;
-- You can plug those PDBs by passing the volume to a new container

The script iterates on all PDBs but I see no reason to create more than one. I unplug the PDB and drop it, and then shutdown the instance. We need the unplug to be completed before the stop timeout. The container may be killed before the drop or shutdown, but as long as we have the .xml we can plug the PDB into a new container.

Plug the PDB at container re-start

I mentioned earlier that at start I create the pluggable database mentioned by ${PDB}. But this is only when there is no /opt/oracle/pdbs/${PDB}.xml
If this file is found, this means that we provide a PDB that was unplugged by a previous container stop.
Actually, when the start detects this file, the following will be run:

whenever sqlerror exit failure;
create pluggable database "${PDB}" using '/opt/oracle/pdbs/${PDB}.xml';
host rm /opt/oracle/pdbs/${PDB}.xml
alter pluggable database "${PDB}" open;
select message from pdb_plug_in_violations;

Finally, because I may start a container which has a newer Release Update than the one which unplugged my PDB, I run:

$ORACLE_HOME/OPatch/datapatch

One PDB per container

My scripts process all PDBs but I think that in most cases we need to have a one-to-one relationship between the container and the PDB. The idea is to provide a database that is ready to use and where no administration/troubleshooting is required. The key here is to keep it simple. If you need to provide a large CDB with several PDBs, then Docker is not the solution to your problem. A virtual machine is a better answer for that.

SPfile? Password file?

The image build provided by Oracle stores the persistent configuration files with the database, in the external volume, through symbolic links from ?/dbs. But with my design, I don’t need to. The configuration of the instance, running in the container, is within the container. The password for SYS is in the container. So the SPFILE and password file stay in the container. The runnable image is not read-only; it is writeable. We can write there as long as the changes do not have to persist beyond the container end of life.

The ‘scope=spfile’ parameters that can be modified by the developer will be PDB parameters. They are persisted because they go to the .xml file at unplug. Only in case of a crash, without a clean unplug, those parameters may be stored only in the container. That’s a special case: a crashed container is not dead and just waits to be re-started.

Crash recovery needs the same container

There’s one big flaw with my CDB-in-container/PDB-in-volume design. All the database datafiles must be consistent: they are checkpointed together and are all protected by the same redo stream, which is located in the container. But what’s the real problem with that? If the container is cleanly stopped, the PDB is unplugged and there is a clear separation between my external volume and the container. And both are consistent.

However, if the container crashes, the datafiles in my external volume are fuzzy and need recovery. This cannot be done without the CDB files which are on the container. This has only one consequence: the user must know that if the container was not cleanly stopped, she will need to start the PDB with the same container. I don’t think this is a real problem. I just ensure that the user gets the warning (a big README file in the external volume for example, created at start and removed at clean stop) and that the container will always be able to recover (no 100% full filesystem at start – anyway I have some log cleanups at start).
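
A minimal sketch of this README idea (the file name and wording are mine):

# at container start, before opening the database:
cat > /opt/oracle/pdbs/README.txt <<'EOF'
If this file is still here, the container was not cleanly stopped.
Re-start the SAME container to recover the PDB before using this volume elsewhere.
EOF
# at clean stop, after the successful unplug:
rm -f /opt/oracle/pdbs/README.txt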

Handle all errors and signals

My startup script handles 3 situations.
The first one is the first start after creation of the container. This creates the pluggable database.
The second one is the re-start after a clean stop. This plugs the existing pluggable database.
The third one is crash-recovery after a kill. This just runs the automatic instance recovery.

Then the startup script will run in a loop, either tailing the alert.log or displaying some status info every minute.

But before all of that, the startup script must handle the termination signals.

The clean stop is handled by the following signals:

trap 'shutdown SIGINT' SIGINT
trap 'shutdown SIGTERM' SIGTERM
trap 'shutdown EXIT' 0

SIGINT is for ^C when running the container, SIGTERM is for ‘docker stop’, and the signal 0 is for when the container exits by itself. This can happen when my ‘tail -f’ on the alert log is killed, for example. All of them call my shutdown() procedure, which attempts a clean stop (unplugging the PDBs).

When the stop timeout has expired, or when we do a ‘docker kill’, there’s no time for that. The only thing I do here before a shutdown abort is an ‘alter system checkpoint’ to try to reduce the recovery needed, and display a WARNING message saying that the killed container must not be removed but re-started asap to recover the PDB in the external volume. Maybe explicitly name the container and the command to re-start it.

I do that with an abort() function called by the following:

trap 'abort SIGKILL' SIGKILL

A kill -9 of the instance, or a container crash, cannot be handled; recovery is needed, as in the SIGKILL case. This is the reason for keeping a permanent README file next to the PDB, explaining that the crashed container should be restarted as soon as possible to recover it. A sketch of such an abort() handler follows.
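
A minimal sketch of such an abort() handler (my own sketch, not the actual script behind this post):

abort() {
  # leave a warning in the external volume: this PDB needs the same container to be recovered
  echo "WARNING: instance killed. Re-start this same container to recover the PDB." > /opt/oracle/pdbs/README.txt
  sqlplus -s / as sysdba <<'SQL'
alter system checkpoint;
shutdown abort
SQL
}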

Conclusion

This is not a recipe for how to build an Oracle Database Docker image, but just some ideas. The most important thing is to know the requirements. If you provide Oracle on Docker just because the developers want it, the solution will probably be wrong: too large, too long, inefficient, and too complex… They will not use it and they will tell everybody that Oracle is not cool because it cannot be dockerized.
CaptureDockerCDBPDB
With my PDB-in-volume / CDB-in-container design, I have :

  • Docker Images with the ephemeral software, one per version (down to patches), and with different set of component installed
  • External volume (personal directory in a NFS share, or a local one) with the persistent data and settings
  • Containers running the software on the data, linking them together for the whole lifecycle of the container

Think of them as 2 USB sticks, one with the software (binaries and packages), and one with the data (user metadata and data). When plugged together on the same container, it runs one version of software with one state of data. If the container crashes, you just run it again without unplugging any of the sticks. When you are done with your test or development iteration, you stop the container and remove it. Then you have unplugged the sticks to run another combination of data and software.

 

Cet article Some ideas about Oracle Database on Docker est apparu en premier sur Blog dbi services.

idql and its column output

Mon, 2018-05-07 03:23
idql and its column output

A few days ago, I was reading an interesting blog from distinguished colleague Clemens Bleile with the title “sqlplus and its column output” (link here https://blog.dbi-services.com/sqlplus-and-its-column-output/) and I said to myself: the lucky Oracle administrators and developers have sqlplus, a rather good, out of the box command-line tool to talk to their databases. What equivalent tool do we have with Documentum ? Well, we have mainly idql, which, to put it mildly, sucks. Unlike sqlplus, idql has no column formatting, no reporting, no variable substitution, no error trapping, actually almost nothing, not even command editing or command history (at least, under Unix/Linux). It just reads DQL statements, passes them to the content server and displays back the received answer. Pretty basic. However, for their defense, the Documentum creators gave away the source code of an ancient version (look for $DOCUMENTUM/share/sdk/example/code/idql.c), so it is relatively easy to enhance it the way you like.
Needless to say, having a nicely displayed output is not possible within idql. Whereas sqlplus’ column formatting allows controlling the column width to make it narrower, and thus avoids those unending lines filled with spaces, in idql a query is displayed as is. For example, in sqlplus, “column mycol format A10” tells sqlplus to display the column mycol as an alphanumeric value in a field not larger than 10 characters; exceeding characters are wrapped around on the next line(s). However, if many columns are SELECTed, long result lines are unavoidable in both sqlplus and idql, and the solution proposed in Clemens’ blog can help with idql too since it applies to the terminal as a whole.
Hereafter though, I’d like to propose a few alternatives that don’t require another terminal software, although they may somewhat lack in interactivity. One of them uses the less command, another uses less + a named pipe and a third one a simple awk script to compact and reflow a query’s output. Here we go.
1. Use less command
If reading the output can be done separately from entering the commands into idql, “less -S” is pretty cool:

# run the select and output the result into a text file;
idql dmtest -Udmadmin -Pdmadmin -w100 > tmp_file <<EoQ
select * from dm_sysobject
go
quit
EoQ
# now use less -S to examine the result;
less -S tmp_file

blog1blog2blog3
Now, it is possible to scroll left and right and have a good look at the result.
Some columns, though, are so wide and filled with trailing blanks that it is quite distracting. This will be taken care of later in the last alternative.

2. A more interactive variant with less -S
It is possible to stay in idql while the output is being redirected into a named pipe which is read by “less -S” and displayed in a second terminal. To do this, follow the steps below:
a. Create a named pipe named idqlp:

mknod idqlp p    # or simply: mkfifo idqlp

Contrary to the usual anonymous pipes, named pipes have, well, a name, and are created like files in a filesystem. As expected, like their counterparts, they can be written to and read from.
b. copy/paste the following command, it will create a pre-processor script for the less command:

cat - <<'EoScript' > lesspipe.sh
#! /bin/sh

# must define LESSOPEN environment variable to be used;
# export LESSOPEN="|~/lesspipe.sh %s"

case "$1" in
   idqlp)
      cat idqlp
      ;;
   *) exit 1
      ;;
esac
exit $?
EoScript

The preprocessor script will be invoked when less is launched, right before it. This is a nifty feature of less which allows to play tricks with binary files, e.g. decompressing them before viewing them (if they are compressed files). I guess less-ing a java class file could first invoke a decompiler and then pass the result to less. There is also a postprocessor for tasks to be performed after less exits, such as cleaning up the intermediate file created by the preprocessor. All this is very well presented in less’ man page.
c. make it executable:

chmod +x lesspipe.sh

d. copy/paste the following command in a terminal, it will create the consumer script that will continuously be reading from the named pipe idqlp:

cat - <<'EoScript' > readpipe.sh
#! /bin/bash

export LESSOPEN="|~/lesspipe.sh %s"

while [ true ]; do
   less -S idqlp
done
EoScript

e. make it executable:

chmod +x readpipe.sh

f. in the first terminal, run this script in the foreground:

./readpipe.sh

g. in the second terminal, run idql with a redirection into the named pipe idqlp:

idql dmtest -Udmadmin -Pdmadmin -w100 > idqlp

h. now, type your DQL statements with or without those endless lines:

execute show_sessions

go

select * from dm_sysobject
go

select * from dm_document
go

Result:

blog7

The DQL output of the second terminal (bottom) is displayed in the first terminal (top) where it can be browsed by less.
This trick works quite well as long as a few precautions are respected. As you know, a pipe blocks the producer when it gets full until the consumer starts reading it at the other end. Thus, idql is blocked as long as less has not finished reading its output; for short outputs, no special action is required but for long listings, in order to force less to reach the end of the output, type shift-g in the less window; give it enough time so the DQL statement completes its output, then ctrl-C. idql is then released whereas the output can be quietly navigated from within less in the first terminal. Once done, BEFORE entering any new command in idql, quit less (command q) so the next cycle begins. Now, the next command can be typed in idql. Failure to do this can hang the consumer in the first terminal and the commands below must be used to get it back on track:

ctrl-Z         # send the consumer into the background;
jobs -l        # identify the pid that's messed up;
kill -9 pid    # send it ad patres;
./readpipe.sh  # restart the consumer;

Or use the one-liner:

jobs -l | grep readpipe.sh | awk '{print $2}' | xargs kill -9

Sometimes, even the producer process must be restarted. If all this looks clumsy at first, once you get the habit of it, it becomes quite automatic.
This alternative is nice because it avoids cluttering the command window: the DQL commands are separated from their output and therefore stay visible in the second terminal. Moreover, as illustrated, error messages don’t show in the less-ed output.

3. The third alternative: compact the output
Eventhough those pesky long lines are now tamed, the issue of those extra-wide columns mostly filled with blanks remains and this alternative is aimed at it.
Firstly, here is where we take our inspiration from, Oracle sqlplus. Consider the SQL query below:
blog5
— The USER_PASSWORD column is still too wide, let’s narrow it:
blog6
See what happened here ? Column USER_PASSWORD’s text has been reflowed inside the column’s width, not truncated.
OK, we want all these 3 things:
. compact the columns by removing trailing blanks;
. control the column widths by resizing them the way we like;
. introduce column wrapping if our width is too narrow;
Obviously, since we don’t have the source code of the current idql, we cannot enhance the way it displays the query results, so we will do this outside idql, in 2 steps: execute the query and capture its output, then process it.
The output processing is performed by the following awk script:

# compact_wwa.awk
# Usage:
#    gawk -v maxw=nn -f compact_wwa.awk file
# or:
#    cmd | gawk -v maxw=nn -f compact_wwa.awk
# where:
#     maxw is the maximum column width; characters outside this limit are wrapped around in their own column;
# example:
#     gawk -v maxw=50 -f compact_wwa.awk tmp_file  | less -S
# C. Cervini, dbi-services.com
BEGIN {
   while (getline && !match($0, /^([0-9]+> )+/));
   header = substr($0, RLENGTH + 1)
   getline
   nbFields = NF
   fs[0] = 0; fw[0] = -1 # just so that fs[1] = 1, see below;
   headerLine = ""; sepLine = ""
   for (i = 1; i <= NF; i++) {
      fs[i] = fs[i - 1] + fw[i - 1] + 2
      fw[i] = length($i)
      sepLine = sepLine sprintf("%s  ", substr($0, fs[i], min(fw[i], maxw)))
   }
   printWithWA(header)
   printf("%s\n", sepLine)
}
{
   if (match($0, /^\([0-9]+ rows? affected\)/)) {
      print
      exit
   }
   printWithWA($0)
}
function printWithWA(S) {
   do {
      left_over = ""
      for (i = 1; i <= nbFields; i++) {
         Min = min(fw[i], maxw)
         printf("%s  ", substr(S, fs[i], Min))
         subS = substr(S, fs[i] + Min, fw[i] - Min)
         if (length(subS) > 0) {
            left_over = left_over sprintf("%-*s  ", fw[i], subS)
         }
         else
            left_over = left_over sprintf("%*s  ", fw[i], "")
      }
      printf "\n"
      gsub(/ +$/, "", left_over)
      S = left_over
   } while (left_over)
}
function min(x, y) {
   return(x <= y ? x : y)
}

Now, let’s put it to use:

idql dmtest -Udmadmin -Pdmadmin <<EoQ | gawk -v maxw=35 -f compact_wwa.awk | less -S
select r_object_id, user_name, user_os_name, user_address, user_group_name, user_privileges, owner_def_permit, world_def_permit, group_def_permit, default_folder, user_db_name, description,
acl_domain, acl_name, user_os_domain, home_docbase, user_state, client_capability, globally_managed, user_delegation, workflow_disabled, alias_set_id, user_source, user_ldap_dn, user_xprivileges,
failed_auth_attempt, user_admin, user_global_unique_id, user_login_name, user_login_domain, user_initials, USER_PASSWORD, user_web_page, first_failed_auth_utc_time, last_login_utc_time,
deactivated_utc_time, deactivated_ip_addr, root_log_dir
from
   dm_user
go
exit
EoQ

By the way, funny thing: here “select user_password” is not the same as “select USER_PASSWORD”. The first returns a sequence of asterisks while the second returns an ASCII representation of the encrypted password. The generated SQL explains why.
“select user_password” gets compiled into the SQL statement below:

select all '****************' as user_password from dm_user_sp dm_user

whereas “select USER_PASSWORD” is the real one:

select all dm_user.USER_PASSWORD from dm_user_sp dm_user

Unlike unquoted column names in Oracle SQL, attribute names in DQL are case-sensitive!
And here is the result:

blog4

The script takes one parameter, maxw, the maximum column width. If a column value is wider than that, it is wrapped around on the next line(s) until the whole value has been displayed.

4. The altogether

What if we want the above line compaction and column wrapping, but interactively as in alternative 2? Easy. Just edit the script readpipe.sh and change the line

cat idqlp

to

cat idqlp | gawk -v maxw=35 -f compact_wwa.awk

In other words, we preprocess idql’s output through the awk filter before handing it to less.
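For reference, here is a minimal sketch of what the modified consumer could look like, assuming the named pipe is called idqlp as above; this is not the original readpipe.sh, just the idea reduced to its core:

#!/bin/bash
# Minimal sketch of the consumer loop: read the named pipe idqlp,
# compact and wrap the columns, then page the result with less;
# quitting less (q) starts the next cycle.
while true; do
   cat idqlp | gawk -v maxw=35 -f compact_wwa.awk | less -S
done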

A final alternative

We can achieve the same result with dmawk by writing a generic procedure that takes a DQL query to execute and a maximum column width, or a list of column names and widths (as if we had entered a sequence of “col mycol format …” in sqlplus), so everything is done on the fly; but, as they say, this is left as an exercise to the reader. Or, why not, maybe in a blog to come.

 

Cet article idql and its column output est apparu en premier sur Blog dbi services.

ADWC – the hidden gem: Zeppelin Notebook

Fri, 2018-05-04 14:22

IMG_5339
In the previous blog posts I explained how to create and how to stop/start the Autonomous Data Warehouse Cloud service. But I haven’t shown yet how to connect to it. It is easy, from sqlplus, SQL Developer, or SQLcl.

But there’s something more exciting for running SQL queries: the Oracle Machine Learning Notebooks based on Apache Zeppelin. At first, I didn’t realize why the administration menu entry to create users in the ADWC service was named ‘Manage Oracle ML Users’, and didn’t realize that the ‘Autonomous Data Warehouse Cloud’ header was replaced by ‘Machine Learning’.

But last week at IOUG Collaborate 18, I visited the Demo Grounds, and thanks to Charlie Berger I realized all the power of this: we are in the ‘Machine Learning’ interface here, and the home button opens all the features available to query the ADWC database, including the SQL Notebooks based on Apache Zeppelin.

Here is the path to this hidden Gem. From your ADWC service, you go to the Service Console:
CaptureZepplin001

Here you log in as the ADMIN user with the >12-character password that you defined at service creation. Don’t worry if you forgot it, you can reset it from here:
CaptureZepplin002

Once connected, you go to the Administration tab and choose the ‘Manage Oracle ML Users':

CaptureZepplin003

Here you have to create a user because the ADMIN user is not a Machine Learning user. Machine Learning users need one of the following roles: OML_DEVELOPER, OML_APP_ADMIN, OML_SYS_ADMIN. The user you will create here will have OML_DEVELOPER, which is required to use SQL Notebooks.

Now that you have a user created from here, you can click on this little house icon, which is your home in the Machine Learning part of the ADWC:

CaptureZepplin004

Here you connect with the user you have created from the Oracle ML User page (not the ADMIN one as it has no OML role granted).

CaptureZepplin005

Then you are in your OML home, ready to run SQL from a Notebook:

CaptureZepplin006

I’ll show what you can do in a future post. But just to give you an idea, you have a Notebook where you can type a query, execute it, and have the result displayed as a table or as a graph. Here I was looking at I/O latency, and the following shows me that the ‘cell single block physical read’ waits (nothing else than the buffered one-block-at-a-time reads that are called ‘db file sequential read’ when not on Exadata), in dark green here, have most of their I/O call time between 128 and 512 microseconds.

CaptureZepplin007

I like this way of having the result just under the query, with easy formatting. The code, documented, is in the same place as the result, in a notebook that is easy to refresh or share. And you can export the whole thing as a simple JSON file.

 

Cet article ADWC – the hidden gem: Zeppelin Notebook est apparu en premier sur Blog dbi services.

SQL Server Security: Are your databases ready for the GDPR?

Fri, 2018-05-04 04:57

The GDPR (General Data Protection Regulation), voted in 2016, will be applied in the European Union soon (May 25, 2018). We have heard a lot about this new regulation.
The goal is to improve the protection and confidentiality of personally identifiable information for every European citizen. What about Switzerland? Read Gregory’s article here.
Personal data is information that identifies a natural person, directly or indirectly. It can be a name, a photograph, an IP address, a phone number, a computer login, a mailing address, a fingerprint, a voice recording, a social security number, an email, etc. Some data is sensitive as it relates to information that may give rise to discrimination or prejudice.

Microsoft reacts with a Guide to enhancing privacy and addressing GDPR requirements with the Microsoft SQL platform and gives the DBA a new feature in the latest version of SSMS (SQL Server Management Studio) 17: SQL Data Discovery and Classification.
This feature has existed in SSMS since version 17.5. I installed version 17.6 for this article.
Once SSMS is updated, just choose the database you want to scan, right-click on it and select Tasks, Classify Data…:

gdpr_sql_01

After a few seconds, you have the result of the scan, in my case 39 columns with classification recommendations:

gdpr_sql_02

Click on this result to see the detail:

gdpr_sql_03

For each column, you will see an Information Type and a Sensitivity Label.
The Information Type gives finer-grained detail about the type of data:

gdpr_sql_04

The Sensitivity Label indicates the level of sensitivity of the data:

gdpr_sql_05

As you can see, you have 2 levels for GDPR: Confidential and Highly Confidential.
You can identify some information, for example the personal phone number, and change it from Confidential – GDPR to Highly Confidential – GDPR:

gdpr_sql_06

After reviewing all columns and changing the Information Type and Sensitivity Label where needed, you can check all boxes or select only a few columns and click on “Accept selected recommendations”:

gdpr_sql_07

After that, save the result. Without saving it, you cannot see the report…
If you want, you also have the possibility to add a classification manually through the “Add Classification” button.
The message “The classification changes have been updated successfully” confirms it:

gdpr_sql_09

At the end you can click on View Report to have a global view and see with the application owner how to protect the sensitive columns with features like Dynamic Data Masking, Always Encrypted or a third-party tool.

gdpr_sql_08
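As a quick illustration of the first option, Dynamic Data Masking can be enabled on a sensitive column with a single DDL statement; the database, table and column names below are placeholders only and are not taken from my demo database:

REM placeholders only: adjust database, table and column names to your own schema
sqlcmd -S localhost -E -d MyDatabase -Q "ALTER TABLE dbo.Customer ALTER COLUMN PhoneNumber ADD MASKED WITH (FUNCTION = 'default()');"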

If I go to a classified column in Object Explorer, right-click and select Properties, I can see in the Extended Properties the information type and sensitivity label:
gdpr_sql_10
As you can see, the information is written directly to the column properties (this is what the Save button does).
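If you want to list everything that has been classified in a database, the extended properties can also be queried directly. The property names below (sys_information_type_name, sys_sensitivity_label_name) are an assumption based on what SSMS appears to write, so check them against your own columns:

REM list classified columns through the extended properties written by SSMS (property names assumed)
sqlcmd -S localhost -E -d MyDatabase -Q "SELECT OBJECT_SCHEMA_NAME(ep.major_id) AS [schema], OBJECT_NAME(ep.major_id) AS [table], c.name AS [column], ep.name AS [property], CAST(ep.value AS nvarchar(256)) AS [value] FROM sys.extended_properties ep JOIN sys.columns c ON c.object_id = ep.major_id AND c.column_id = ep.minor_id WHERE ep.name IN ('sys_information_type_name','sys_sensitivity_label_name');"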

Like Vulnerability Assessment, you need to run it database by database; there is no possibility to scan all databases of an instance in one click. It is also not possible to script it… Perhaps in the next version! ;-)

It’s also available on Azure, click here for more information.

 

 

Cet article SQL Server Security: Are your databases ready for the GDPR? est apparu en premier sur Blog dbi services.

ADWC – a Docker container to start/stop Oracle Cloud services

Thu, 2018-05-03 01:08

In the previous post, I explained how to start and stop the Autonomous Data Warehouse Cloud service from PSM (PaaS Service Manager). There’s a setup phase, and a run phase starting with start-service and ending with stop-service. And the setup is specific to an Oracle Cloud account, storing information in the local user home. You may want to run different setups, and even provide an easy way to start/stop an Oracle Cloud service without knowing the user, password and tenant name.

A Docker container is perfect to isolate this.

Dockerfile

Here is my quick (aka experimental) Dockerfile:

FROM alpine:latest
RUN apk add --update --no-cache python3 curl
ENV user=my.cloud.account@me.com
ENV password=MyP@ssw0rd
ENV tenant=idcs-31bbd63c3cb9466cb8a96f627b6b6116
ENV region=us
# get PSM
RUN curl -X GET -u ${user}:${password} -H X-ID-TENANT-NAME:${tenant} https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/${tenant}/client -o psmcli.zip
# install PSM
RUN pip3 install -U psmcli.zip
# setup PSM
RUN echo "{\"username\":\"${user}\",\"password\":\"${password}\",\"identityDomain\":\"${tenant}\",\"region\":\"${region}\",\"outputFormat\":\"short\"}" > config-payload &&\
psm setup --config-payload config-payload &&\
rm config-payload # remove file with password
# patch PSM for bugs
RUN sed -ie 's/core[/]//g' ~/.psm/data/ADWC.json
# variables which can be overwritten at run time
ENV service=ADWC
ENV name=MYADWC1
CMD trap 'echo "Stopping service ${service}...";psm ${service} stop-service --service-name ${name} -wc true ; exit 0' SIGINT SIGSTOP SIGKILL; echo "Starting service ${service}...";psm ${service} start-service --service-name ${name} -wc true || exit 1 ; echo "You can access to ${service} console with ADMIN user at:";echo;psm ${service} service -s ${name} -of json | jq -r '.serviceConsole';echo ; while sleep 60 ; do echo "Status of service ${service} at $(date)...";psm ADWC service -s ADWC ; done

We need curl to download PSM, and pip3 to install it, and python3 to run it.
You can set your Oracle Cloud Account credentials as environment variables.
Then it fills all required information in a ‘config-payload’ file, runs ‘psm setup’ and removes that file.

At run time, it calls ‘start-service’ and loops, showing the status every minute (you can see it with docker container logs). The INT, STOP and KILL signals call ‘stop-service’. The idea is that while the container exists, the Cloud Service is running, and it is shut down at the container’s end of life. There’s no other action to do with the container: it displays the console URL where you have everything needed to interact with the service (download client credentials, manage users, go to Machine Learning notebooks, …).
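As a side note, since service and name are plain environment variables, the same image can manage another instance without a rebuild; a hedged example, reusing the image tag from the run example below:

# start/stop another ADWC instance with the same image by overriding the name variable
docker run --rm -e name=MYADWC2 franck/psm/awdc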

Run example:

Here is a simple example:

# docker run --rm franck/psm/awdc
 
Starting service MYADWC1...
Message: Job submitted successfully for start of service/system
Job ID: 25583108
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].
You can access to MYADWC1 console with ADMIN user at:
 
https://adwc.uscom-east-1.oraclecloud.com/console/index.html?tenant_name=idcs-31bbd63c3cb9466cb8a96f627b6b6116&database_name=MYADWC1
 
Status of service MYADWC1 at Sun Apr 29 18:20:50 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
Status of service MYADWC1 at Sun Apr 29 18:21:51 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
 
^C
 
Stopping service MYADWC1...
Message: Job submitted successfully for stop of service/system
Job ID: 25620930
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].

After 2 minutes I hit ‘Control-C’ to stop the container. The service has been cleanly shut down.

 

Cet article ADWC – a Docker container to start/stop Oracle Cloud services est apparu en premier sur Blog dbi services.

Load Balancing with Docker Swarm mode and SQL Server containers

Thu, 2018-05-03 00:57

Docker swarm mode provides an easy way to publish ports for services. Indeed, in this mode all nodes participate in an ingress routing mesh and accept connections to any service port published. Thus, all incoming requests are routed to available nodes hosting a service.

That’s a pretty cool feature but it has some drawbacks. Let’s say that a node goes wrong… In this case the application must retry connecting to the next available node, which will likely imply changing the application connection string or using a DNS redirection. A load balancer outside of the Swarm provides a better way to connect to the containers without having to worry about the cluster nodes from an application perspective. Thanks to the ingress routing, all of the published services are available through any of the swarm nodes, and the load balancer can be set to use the swarm private IP addresses without concern for which node is hosting which service.

After taking a look at the existing open source projects / solutions on the market, I quickly reduced the scope of my search to only 2: nginx and HAProxy. Each product has pros and cons, but I chose HAProxy, which fitted well with my tests. In fact, I used more specifically dockercloud-haproxy because it includes an HAProxy docker image that runs with Docker Swarm mode, including docker-compose. I’m not sure whether this project is dead or alive, referring to the dockerhub page that states the following: Unfortunately, this project is currently put into maintenance mode. Please do not send any PR for new features. We will still fix bugs if there is any. Some other alternatives exist, such as proxy.dockerflow, but once again the dockercloud-haproxy project remained relevant for what I wanted to test.

Here is my docker-compose file:

version: '3.1'
services:
  db: 
    build: .
    image: 127.0.0.1:5000/dbi_linux_sql2017:CU4
    ports: 
      - "1433:1433"
    volumes:
      - /u00/db2:/u00
      - /u01/db2:/u01
      - /u02/db2:/u02
      - /u03/db2:/u03
      - /u98/db2:/u98
    environment:
      - MSSQL_SA_PASSWORD_FILE=/run/secrets/mssql_sa_password
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_USER=dbi
      - MSSQL_USER_PASSWORD_FILE=/run/secrets/mssql_user_password
      - TZ=Europe/Berlin
      - SERVICE_PORTS=1433
    networks:
      - back-tier
    deploy:
      replicas: 1
      placement:
        constraints: [node.role != manager]
      resources:
        limits:
          cpus: '1'
          memory: 3584M
        reservations:
          cpus: '1'
          memory: 3072M
    secrets:
      - source: mssql_sa_password
        target: mssql_sa_password
      - source: mssql_user_password
        target: mssql_user_password

  proxy:
      image: dockercloud/haproxy
      depends_on:
        - db
      environment:
        - BALANCE=leastconn
        - ADDITIONAL_SERVICES=project_dir:db
        - MODE=tcp
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      ports:
        - 80:80
        - 1936:1936
      networks:
        - back-tier
      deploy:
        placement:
          constraints: [node.role == manager]

networks:
  back-tier:
    external:
      name: backend-server
  front-tier:
    external:
      name: frontend-server

secrets: # top level secrets block
  mssql_sa_password:
    external: true
  mssql_user_password:
    external: true

 

It includes 2 services:

  • db (my SQL Server container)
  • proxy (my HAProxy server)

I used some Swarm Mode specific environment variables:

  • SERVICE_PORTS=1433 – to expose the port of my db service (mssql default port)
  • BALANCE=leastconn – because it is recommended with long sessions as LDAP, SQL, TSE etc…
  • ADDITIONAL_SERVICES=project_dir:db – to identify the db service from the proxy service
  • MODE=tcp – mode of load balancing for HAProxy (TCP in my case)

I deployed the stack as follows:

$ docker stack deploy -c docker-compose-test.yml mssql
Ignoring unsupported options: build

Creating service mssql_db
Creating service mssql_proxy

 

My service state was as follows:

$ docker service ls --filter name=mssql
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
26zdisl9r64y        mssql_db            replicated          1/1                 127.0.0.1:5000/dbi_linux_sql2017:CU4   *:1433->1433/tcp
nhje3081gwr8        mssql_proxy         replicated          1/1                 dockercloud/haproxy:latest             *:80->80/tcp,*:1936->1936/tcp

 

Concerning the mssql_proxy service, we may notice the exposed ports 80 and 1936. The former will be used to connect to the HAProxy for the SQL Server connection redirection, and the latter is the default port to get HAProxy stats information (from a web browser).
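The stats page can also be fetched from the command line. Note that dockercloud/haproxy protects it with a login defined by its STATS_AUTH variable; the stats:stats pair below is only the assumed default, since the compose file does not override it:

# quick check of the HAProxy statistics page without a browser (default credentials assumed)
curl -s -u stats:stats http://docker1:1936/ -o haproxy_stats.html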

blog 132 - 2 - docker swarm node status1

Here is an example of my HAProxy output:

blog 132 - 1 - docker swarm HA proxy stats

The db service task (mssql_db.1.o9s2xxxxx), which in fact corresponds to my service task (or container), is well identified by the HAProxy.

So, I tried to connect to my SQL Server DB container through the HAProxy:

C:\Users\clustadmin>sqlcmd -S docker1,80 -Usa -PPassw0rd1 -Q"SELECT @@SERVERNAME"

---------------------------------------------------------------------------------
cf7f9d6036f3

 

It worked like a charm!

The next step consisted in simulating a failure of my docker node DOCKER3 …

blog 132 - 3 - docker swarm node status2

… and to try again a connection to the SQL Server DB container, which restarted on the DOCKER2 node. As expected, the connection kept working and was automatically redirected by the HAProxy to the next available node. Obviously, in this case the redirection was not transparent for the application. We have to take into account the short outage that corresponds to the mssql_db container restart duration. By the way, if you take a look at the HAProxy default_service section, you may notice that the task id has changed (mssql_db.1.zo8yqxxxxxx), meaning the system has created a new task (or container). However, my mssql_db container host name remained the same in my case.

blog 132 - 4 - docker swarm HA proxy stats2

C:\Users\clustadmin>sqlcmd -S docker1,80 -Usa -PPassw0rd1 -Q"SELECT @@SERVERNAME"

---------------------------------------------------------------------------------
cf7f9d6036f3

 

Another interesting point I had to dig into concerns the HAProxy health check itself. In my case I used a TCP-based health check to ensure the service is alive, but it is probably not good enough to check that my SQL Server instance (in other words my application) is working correctly. I spent some time reading articles and documentation and I discovered that we may use custom health checks with HAProxy for databases such as MySQL, PostgreSQL and Redis. Unfortunately, at the time I’m writing this blog post, there is nothing for SQL Server, probably because it is pretty new in this area (maybe I am wrong… so please feel free to comment). I found some other alternatives, such as xinetd, that may be used to trigger a custom routine (by using the sqlcmd command line tool for example), but in my context it makes the game a little bit more complex because I would have to include it in my SQL Server docker image. After thinking a little bit about this issue, I realized we had already implemented such a routine directly inside the SQL Server image itself (HEALTHCHECK section), which uses precisely the sqlcmd command line tool to check that the SQL Server connection is working correctly.

/opt/mssql-tools/bin/sqlcmd -S localhost,$MSSQL_TCP_PORT -U sa -P ${SA_PASSWORD} -Q "select 1"

 

By the way, the task status can be viewed through the docker ps command as follows:

$docker ps -f name=mssql --format 'table {{.ID}}\t{{.Names}}\t{{.CreatedAt}}\t{{.Status}}'
CONTAINER ID        NAMES	CREATED AT                       STATUS
9bfe04a97617        mssql_db.1.zc8yqp9llmjdcn3df49izyj72	2018-05-01 23:01:39 +0200 CEST   Up 9 hours (healthy)

 

So, the combination of the docker health check routine and the HAProxy TCP health check seems to be a good solution to address different kinds of failures, including docker swarm node failures as well as application failures.
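As a small sketch (not in the original post), the history of the last health probes recorded by Docker for the container can also be displayed with docker inspect:

# 9bfe04a97617 is the container id from the docker ps output above
$ docker inspect --format '{{json .State.Health}}' 9bfe04a97617 | jq .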

This is just the first step of using an HAProxy with SQL Server (redirection capabilities). You may also extend this first scenario with a low-cost solution including multiple SQL Server Express containers to address scale-out needs, for reporting queries for instance. I remember Christophe Laporte (b) gave a session about this some years ago. I will try to blog about it soon!

See you!

 

 

 

 

 

Cet article Load Balancing with Docker Swarm mode and SQL Server containers est apparu en premier sur Blog dbi services.

How uid mapping works in Docker containers?

Wed, 2018-05-02 09:56

It can be interesting to see how uids are mapped between the docker host and docker containers, for example for security concerns.
As a reminder, docker containers are based on two linux kernel features: linux namespaces and cgroups.

Basically, linux namespaces provide isolation for running processes and cgroups allow you to isolate resource usage.

Let’s first run a docker container. Here, we will run a mariadb container in the background with the -d option.

 
[docker@docker1 ~]$ docker run -d -e MYSQL_ROOT_PASSWORD=test123 mariadb
5c4450939d71814070945f86f9712ba78893417e2342fb48aafced8160cd0d15

Now the container mariadb is running. Let’s see what is happening on a host level.

 
[docker@docker1 ~]$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
polkitd   1729  1718  0 08:14 ?        00:00:00 mysqld

On a container level:

 
root@5c4450939d71:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
mysql        1     0  0 06:14 ?        00:00:00 mysqld
root       174     0  0 06:22 pts/0    00:00:00 bash

At the host level the mysqld process is running as polkitd, and at the container level the process is running as mysql. Any ideas?
This is because the user id (UID) of the mysql user created in the mariadb container corresponds to the UID of the polkitd user on the host.

Let’s see what the user id of the mysql user is in the mariadb container.

 
root@5c4450939d71:~# id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)

The UID of mysql is 999. On the host:

 
[docker@docker1 ~]$ cat /etc/passwd | grep 999
polkitd:x:999:997:User for polkitd:/:/sbin/nologin

We can see that 999 corresponds to the polkitd user id.

How to change this?

Well, this could be a problem because we don’t want to run docker containers with a system user that we don’t know.

One solution could be to create a mysql user with a certain UID on the host:

 
[root@docker1 ~]# useradd -g mysql -u 1099 -m -r mysql

Then, we modify the user id inside the docker image. To do so, we need to rebuild a new mariadb image :-)
Let’s first clone the docker mariadb project

 
[docker@docker1 ~]$ git clone https://github.com/docker-library/mariadb.git
Cloning into 'mariadb'...
remote: Counting objects: 751, done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 751 (delta 9), reused 18 (delta 8), pack-reused 728
Receiving objects: 100% (751/751), 152.38 KiB | 0 bytes/s, done.
Resolving deltas: 100% (338/338), done.

We enter the directory of the mariadb version 10.3

 
[docker@docker1 ~]$ cd mariadb/10.3/

We need to modify the Dockerfile where all instructions are described

 
[docker@docker1 10.3]$ vi Dockerfile

Change this line

 
# vim:set ft=dockerfile:
FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

To this line

 
# vim:set ft=dockerfile:
FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -g 1099 -r mysql && useradd -u 1099 -r -g mysql mysql

We rebuild a new image, let’s call it mariadbcustom

 
[docker@docker1 10.3]$ docker build -t mariadbcustom:latest .
Sending build context to Docker daemon  13.31kB
Step 1/19 : FROM debian:jessie
 ---> 5dd74d62fab8
Step 2/19 : RUN groupadd -g 1099 -r mysql && useradd -u 1099 -r -g mysql mysql
 ---> Using cache
 ---> a285892faa45
Step 3/19 : ENV GOSU_VERSION 1.10
 ---> Using cache
 ---> 069252945f7a
Step 4/19 : RUN set -ex;                fetchDeps='             ca-certificates                 wget    ';      apt-get update;         apt-get install -y --no-install-recommends $fetchDeps;   rm -rf /var/lib/apt/lists/*;            dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')";      wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch";     wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc";             export GNUPGHOME="$(mktemp -d)";        gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4;         gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu;       rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc;             chmod +x /usr/local/bin/gosu;    gosu nobody true;               apt-get purge -y --auto-remove $fetchDeps
 ---> Using cache
 ---> c82d4738b781
Step 5/19 : RUN mkdir /docker-entrypoint-initdb.d
 ---> Using cache
 ---> 08acd0843256
Step 6/19 : RUN apt-get update && apt-get install -y --no-install-recommends            apt-transport-https ca-certificates             pwgen   && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 3ed44a5e3cf5
Step 7/19 : ENV GPG_KEYS        199369E5404BD5FC7D2FE43BCBCB082A1BB943DB        430BDF5C56E7C94E848EE60C1C4CBDCDCD2EFD2A        4D1BB29D63D98E422B2113B19334A25F8507EFA5
 ---> Using cache
 ---> b30af869afbb
Step 8/19 : RUN set -ex;        export GNUPGHOME="$(mktemp -d)";        for key in $GPG_KEYS; do                gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key";   done;   gpg --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg;    rm -r "$GNUPGHOME";     apt-key list
 ---> Using cache
 ---> 7a6e03190271
Step 9/19 : RUN echo "deb https://repo.percona.com/apt jessie main" > /etc/apt/sources.list.d/percona.list      &> /etc/apt/preferences.d/percona
 ---> Using cache
 ---> e55705d326a2
Step 10/19 : ENV MARIADB_MAJOR 10.3
 ---> Using cache
 ---> bb3bc4adcf42
Step 11/19 : ENV MARIADB_VERSION 1:10.3.6+maria~jessie
 ---> Using cache
 ---> 05bb1dc686c8
Step 12/19 : RUN echo "deb http://ftp.osuosl.org/pub/mariadb/repo/$MARIADB_MAJOR/debian jessie main" > /etc/apt/sources.list.d/mariadb.list     &> /etc/apt/preferences.d/mariadb
 ---> Using cache
 ---> 3626c50c8d83
Step 13/19 : RUN {              echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password password 'unused';              echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password_again password 'unused';         } | debconf-set-selections      && apt-get update       && apt-get install -y           "mariadb-server=$MARIADB_VERSION"                percona-xtrabackup-24           socat   && rm -rf /var/lib/apt/lists/*  && sed -ri 's/^user\s/#&/' /etc/mysql/my.cnf /etc/mysql/conf.d/*        && rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld      && chown -R mysql:mysql /var/lib/mysql /var/run/mysqld  && chmod 777 /var/run/mysqld    && find /etc/mysql/ -name '*.cnf' -print0                | xargs -0 grep -lZE '^(bind-address|log)'              | xargs -rt -0 sed -Ei 's/^(bind-address|log)/#&/'      && echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
 ---> Using cache
 ---> 7d3d52632798
Step 14/19 : VOLUME /var/lib/mysql
 ---> Using cache
 ---> 3880f6c65676
Step 15/19 : COPY docker-entrypoint.sh /usr/local/bin/
 ---> Using cache
 ---> 98aa1e3161c4
Step 16/19 : RUN ln -s usr/local/bin/docker-entrypoint.sh / # backwards compat
 ---> Using cache
 ---> a5394275c2b2
Step 17/19 : ENTRYPOINT ["docker-entrypoint.sh"]
 ---> Using cache
 ---> c456c7b34697
Step 18/19 : EXPOSE 3306
 ---> Using cache
 ---> 05068b456523
Step 19/19 : CMD ["mysqld"]
 ---> Using cache
 ---> 5973a27bfd43
Successfully built 5973a27bfd43
Successfully tagged mariadbcustom:latest

Let’s check our image is here

 
[docker@docker1 10.3]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
mariadbcustom       latest              5973a27bfd43        8 days ago          403MB

we run a docker container with our new customized image

 
[docker@docker1 10.3]$ docker run -d -e MYSQL_ROOT_PASSWORD=test123 mariadbcustom
7e344d87c4bc2a9c62298b9ec97aa4a331d8311cb1f077f47fcb673f1b3d8fa7

Let’s check if the user id was properly initialized to the mysql user

 
[docker@docker1 10.3]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
7e344d87c4bc        mariadbcustom       "docker-entrypoint.s…"   6 minutes ago       Up 6 minutes        3306/tcp            hungry_heisenberg
 
[docker@docker1 10.3]$ docker exec -it hungry_heisenberg /bin/bash
 
root@7e344d87c4bc:~# id mysql
uid=1099(mysql) gid=1099(mysql) groups=1099(mysql)

We also check that the mysqld process runs on the host as the mysql user.

 
[docker@docker1 10.3]$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
mysql     2727  2716  2 14:05 ?        00:00:00 mysqld

On the host, we can see that the mysqld process runs as the mysql user. Why? Because now the user id of the mysql user inside the docker container corresponds to the one existing on the host. In this case, the user id is 1099.
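A quick cross-check, as a sketch based on the outputs above, is to compare the uid reported inside the container with the owner of the mysqld process on the host:

$ docker exec hungry_heisenberg id -u mysql     # returns 1099 inside the container
$ ps -o uid,user,comm -C mysqld                 # shows 1099 / mysql for mysqld on the host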

Conclusion:

In some use cases you might want to use a specific user to run a process, rather than root or an arbitrary user. However, in order to do that you sometimes need to change the Dockerfile or add a specific user on the host.

 

Cet article How uid mapping works in Docker containers? est apparu en premier sur Blog dbi services.

Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 2

Wed, 2018-05-02 04:25

In this part of the blog post series, we will show how Ansible helps us to configure our cluster and install all prerequisites needed for Cloudera Manager. Ansible is currently one of the most popular automation tools.

Ansible will help us to configure all nodes for a manual installation using Cloudera Manager. Our playbook will contain the following roles:

  • cm_repo: add the same C.M repo into all nodes.
  • os_config: Adjust all OS parameter for installing a Cloudera cluster. 
  • java: Java JDK 1.7.80 installation.
  • cm_agents: Installation of the C.M agent’s packages
  • MariaDB: Installation of a MariaDB. C.M needs an Oracle, MySQL (MariaDB) or PostgreSQL database for Cloudera Manager meta-data storage and Hive meta-store.
  • mysql_connector: Installation of the MySQL connector for connecting to MariaDB. 
  • scm: Install and start the Cloudera Manager Server.

In a Big Data cluster, we split the nodes into roles.

  • Manager: dedicated node for all Cloudera Manager daemons
  • Master: NameNode daemon + Secondary NameNode daemon
  • Workers: DataNode daemons

The first step is to define the Ansible hosts inventory file. Below is my inventory file.

[db_server]
manager ansible_host=<manager_ip> id=6

[cdh_manager]
manager  ansible_host=<manager_ip> id=6

[cdh_master]
master ansible_host=<master_ip>  id=5

[cdh_worker]
worker1 ansible_host=<worker1>  id=2
worker2 ansible_host=<worker2>  id=3
worker3 ansible_host=<worker3>  id=4

[cdh_servers:children]
cdh_worker
cdh_master
cdh_manager


[all:vars]
ansible_user=centos
ansible_ssh_pass=<YOUR_PASSWORD>
ansible_sudo_pass=<YOUR_PASSWORD>

We will now define all variables needed for our roles. Variables are split by role:

Below is an example of variable definitions for the CDH server instances: cdh_servers.yml

---

db_hostname: "{{ hostvars[groups['db_server'][0]]['inventory_hostname'] }}"
scm_hostname: "{{ hostvars[groups['cdh_manager'][0]]['inventory_hostname'] }}"

cdh_version: 5.14.2
cluster_display_name: cluster_1

# Users and Groups
group:
  - dbi
user:
  - dbi

# Java variables
java_download_url: http://ftp.osuosl.org/pub/funtoo/distfiles/oracle-java/jdk-7u80-linux-x64.tar.gz
java_download_folder: /usr/java
java_name: "{{java_download_folder}}/jdk1.7_80"
java_archive: "{{java_download_folder}}/jdk-7u80-linux-x64.tar.gz"

# Mysql Java connector
mysql_java: mysql-connector-java-5.1.46
mysql_java_download_url: https://dev.mysql.com/get/Downloads/Connector-J/"{{mysql_java_archive}}"
mysql_java_download_folder: /usr/share/mysql-java/
mysql_java_archive: "{{ mysql_java_download_folder }}/{{ mysql_java }}.tar.gz"

mysql_java_jar: /usr/share/java/mysql-connector-java.jar

Similar files will be created for the database server variables (db_server.yml) and the Cloudera Manager server variables (scm_server.yml).

After the variables definition, we can start creating the different roles and their associated tasks.

 Cloudera Manager repo

The goal of this role is to add the same C.M repo in all cluster hosts. We will use a template of the repository file.

cloudera-manager.repo.j2

[cloudera-manager]
# Packages for Cloudera Manager, Version 5, on RedHat or CentOS 7 x86_64
name=Cloudera Manager
baseurl=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/{{cdh_version}}/
gpgkey=https://archive.cloudera.com/cm5/redhat/7/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck = 1

cm_repo:

---
- name: Add Cloudera repo
  template:
    src: ../templates/cloudera-manager.repo.j2
    dest: "/etc/yum.repos.d/cloudera-manager{{cdh_version}}.repo"

The definition of the Cloudera Manager version was previously done in the cdh_servers.yml variable file.
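Each role is tagged in site.yml (shown further below), so it can be (re)applied on its own; a hedged example, assuming the inventory file shown earlier is saved as hosts:

# apply only the Cloudera Manager repository role to all hosts
ansible-playbook -i hosts site.yml --tags cm_repo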

OS Configuration

Some requirements are needed before installing a Cloudera cluster. This role will configure all hosts with Cloudera requirements: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#cmig_topic_4 .

---
- name: Create groups
  group:
    name: "{{item}}"
    state: present
  with_items: "{{group}}"

- name: Create user
  user:
    name: "{{item}}"
    shell: /bin/bash
    uid: 1050
    groups: "{{group}}"
  with_items: "{{user}}"

- name: "Build hosts file"
  lineinfile:
    dest: /etc/hosts
    regexp: '.*{{ item }}$'
    line: "{{ hostvars
  • ['ansible_default_ipv4']['address'] }} {{item}}"     state: present   when: hostvars
  • ['ansible_default_ipv4']['address'] is defined   with_items: '{{groups.all}}' - name: Disable transparent huge page - defrag   shell: echo "never" > /sys/kernel/mm/transparent_hugepage/defrag - name: Disable transparent huge page - enabled   shell: echo "never" > /sys/kernel/mm/transparent_hugepage/enabled - name: VM swappiness - 1   shell: echo "1" > /proc/sys/vm/swappiness - name: Set VM swappiness - 2   sysctl:     name: vm.swappiness     value: 1     state: present - name: Create /data dir   file:     path: /data     state: directory     mode: 0775     owner: dbi     group: dbi - name: Create file system on volume   filesystem:     fstype: ext4     dev: /dev/xvdb - name: Mount volume as /data   mount:     name: /data     src: /dev/xvdb     fstype: ext4     opts: defaults,noatime     state: mounted - name: install the latest version of ntp   yum:     name: ntp     state: latest - name: install the latest version of nscd   yum:     name: nscd     state: latest - name: install wget   yum:     name: wget     state: latest - name: Disable SELinux   selinux:     state: disabled - name: Reboot for SELinux if needed   command: /sbin/shutdown -r +1   async: 0   poll: 0
    Java installation

    The Java installation is one of the most complex parts of the installation. First, we need to choose a supported version of the JDK. Then we need to be sure that Java has been installed properly on all hosts. The installation task is split into the following parts:

    • Create installation directories: /usr/share/java and /usr/java
    • Download Java JDK 1.7.80 which is a supported version for Cloudera Manager
    • Unarchive Java JDK
    • Fix ownership
    • Make Java available for the system with alternatives
    • Clean up installation download folder
    • Add Java home path by exporting $JAVA_HOME variable

    Below are the java installation tasks.

    
    
    ---
    - name: Create directories
      file:
        path: "{{ item }}"
        state: directory
      with_items:
        - "{{ java_download_folder }}"
        - "/usr/share/java"
    
    - name: Creates directory
      file:
        path:  "{{ java_download_folder }}"
        state: directory
    
    
    - name: Download Java
      get_url:
        url: "{{ java_download_url }}"
        dest: "{{ java_archive }}"
        headers: "Cookie:' gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie'"
        validate_certs: no
    
    - name: Unarchive Java archive
      unarchive:
        src: "{{ java_archive }}"
        dest: "{{ java_download_folder }}"
        copy: no
    
    - name: Fix ownership
      file:
        state: directory
        path: "{{ java_name }}"
        owner: root
        group: root
        recurse: yes
    
    - name: Make Java available for system with alternatives
      command: 'alternatives --install "/usr/bin/java" "java" "{{java_name}}/bin/java" 2'
    
    - name: Clean up Java download
      file:
        state: absent
        path: "{{java_archive}}"
    
    - name: Add java home path
      blockinfile:
        dest: /etc/profile
        block: |
          export JAVA_HOME=/usr/java/jdk1.7.0_80
          export PATH=$JAVA_HOME/bin:$PATH
          regexp: "JAVA_HOME"
        state: present
    MariaDB installation

    After installing Java, we can start the installation and configuration of MariaDB database. You can find the entire role for MariaDB installation here.

    MySQL connector

    The MySQL connector installation follows approximately the same steps as the Java installation. All details here.

    Cloudera Manager Server installation

    The last role of this playbook is the installation of the Cloudera Manager server. This role will simply install the Cloudera Manager server package on the cdh_manager host and start the 2 following daemons:

    • cloudera-manager-daemons
    • cloudera-manager-server
    ---
    - include_vars: ../../../group_vars/db_server.yml
    
    - name: Install the Cloudera Manager Server Packages
      yum:
        name: "{{ item }}"
        state: installed
      with_items:
        - cloudera-manager-daemons
        - cloudera-manager-server
    
    # - name: Prepare Cloudera Manager Server External Database
    #   command: /usr/share/cmf/schema/scm_prepare_database.sh
    #              -f
    #              --host {{ hostvars[db_hostname]['inventory_hostname'] }}
    #              mysql {{ databases.scm.name }} {{ databases.scm.user }} {{ databases.scm.pass }}
    #   changed_when: False
    
    - name: Start the Cloudera Manager Server
      service:
        name: "{{ item }}"
        state: restarted
        enabled: yes
      notify:
        - wait cloudera-scm-server
      with_items:
        - cloudera-scm-server
        - cloudera-scm-agent
    
    # Trigger handler to wait for SCM to startup
    - meta: flush_handlers

     

    site.yml

    After creating all roles, we need to define our site.yml in order to execute all tasks in the desired order.

    ---
    # Cloudera playbook
    
    - name: Configure Cloudera Manager Repository
      become: ansible_become
      hosts: cdh_servers
      roles:
        - cm_repo
      tags: cm_repo
    
    - name: Configure Epel repository
      become: ansible_become
      hosts: cdh_servers
      roles:
        - epel
      tags: epel_repo
    
    - name: OS Configuration
      become: ansible_become
      hosts: cdh_servers
      roles:
          - os_config
      tags: os_config
    
    - name: Install Java JDK 7
      become: ansible_become
      hosts: cdh_servers
      roles:
        - java
      tags: java
    
    - name: Install MySQL Java Connector
      become: ansible_become
      hosts: cdh_servers
      roles:
        - mysql_connector
      tags: mysql_java_connector
    
    - name: Install MariaDB and create databases
      hosts: db_server
      roles:
        - mariadb
      tags: mysql
    
    # ##############
    - name: Install Cloudera Manager Agents
      hosts: cdh_servers
      roles:
        - cm_agents
      tags: cm_agents
    
    - name: Install Cloudera Manager Server
      hosts: cdh_manager
      roles:
        - scm
      tags: cluster_template
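
    With all roles in place, the whole playbook can be run against the inventory. As before, the inventory file name hosts is an assumption, so adjust it to your own layout:

    # run the complete playbook in the order defined by site.yml
    ansible-playbook -i hosts site.yml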

     

    When all steps have finished, you can access the Cloudera Manager web interface at the following URL:

    http://<cdh_manager_ip>:7180

    Make sure your network configuration allows access to the Cloudera Manager web UI through the default port 7180.

    Cloudera-Manager

    The entire project with all files is available here.

     

    Cet article Deploy a Cloudera cluster with Terraform and Ansible in Azure – part 2 est apparu en premier sur Blog dbi services.

    ADWC: start/stop with PSM Command Line Interface

    Tue, 2018-05-01 23:10

    In the previous post, I explained how to create an Autonomous Data Warehouse with PSM (PaaS Service Manager Command Line Interface). The most common operation you want to do with it is starting and stopping the service. This is the best way to save credits for hourly billed services. And PSM is the easiest way: it runs from everywhere (it is Python 3) and there is no need to provide credentials each time. In the previous post, I also explained how to set up PSM for the ADWC service.

    Unfortunately, for starting and stopping the instance you may realize that:

    • It is not in the documentation
    • Syntax exists but doesn’t work


    The documentation is there, but no mention of start-service, stop-service nor restart-service: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/adwc-commands.html

    The online help has start/stop/restart-service:

    $ psm adwc h
     
    DESCRIPTION
    Oracle Autonomous Data Warehouse Cloud
     
    SYNOPSIS
    psm ADWC [parameters]  
    AVAILABLE COMMANDS
    o services
    List all Autonomous Data Warehouse Cloud instances
    o service
    List Autonomous Data Warehouse Cloud instance
    o create-service
    Provision Autonomous Data Warehouse
    o delete-service
    Unprovision Autonomous Data Warehouse
    o scale-service
    Scale Autonomous Data Warehouse
    o start-service
    This operation will set the operational state of service as started
    o stop-service
    This operation will set the operational state of service as stopped
    o restart-service
    This operation will set the operational state of service as after...
    o view-backups
    List all backups of Autonomous Data Warehouse Cloud instance
    o view-backup
    List a backup of Autonomous Data Warehouse Cloud instance
    o backup
    Backup Autonomous Data Warehouse
    o view-restores
    List all restore operations for Autonomous Data Warehouse Cloud instance
    o view-restore
    List a specified restore operation for Autonomous Data Warehouse Cloud...
    o restore
    Restore Autonomous Data Warehouse
    o check-health
    Health Check operation
    o operation-status
    View status of Autonomous Data Warehouse Cloud instance operation
    o activities
    View activities for Autonomous Data Warehouse Cloud instance
    o help
    Show help

    All 3 take the same parameters: the service name, the REST API output format, and a boolean to wait for the completion of the job:

    $ psm adwc start-service h
     
    DESCRIPTION
    This operation will set the operational state of service as started
     
    SYNOPSIS
    psm ADWC start-service [parameters] -s, --service-name
    [-of, --output-format ] [-wc, --wait-until-complete ]  
    AVAILABLE PARAMETERS
    -s, --service-name (string)
    Name of the Autonomous Data Warehouse Cloud instance
     
    -of, --output-format (string)
    Desired output format. Valid values are [short, json, html]  
    -wc, --wait-until-complete (boolean)
    Wait until the command is complete. Valid values are [true, false]. Default is
    'false'.
     
    EXAMPLES
    psm ADWC start-service -s ExampleInstance

    But…

    So, the online help shows it and I try it:

    $ psm adwc start-service --service-name ADWC --output-format short -wc true
     
    Error: Not Found. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Draft//EN">
    <HTML>
    <HEAD>
    <TITLE>Error 404--Not Found</TITLE>
    </HEAD>
    <BODY bgcolor="white">
    <FONT FACE=Helvetica><BR CLEAR=all>
    <TABLE border=0 cellspacing=5><TR><TD><BR CLEAR=all>
    <FONT FACE="Helvetica" COLOR="black" SIZE="3"><H2>Error 404--Not Found</H2>
    </FONT></TD></TR>
    </TABLE>
    <TABLE border=0 width=100% cellpadding=10><TR><TD VALIGN=top WIDTH=100% BGCOLOR=white><FONT FACE="Courier New"><FONT FACE="Helvetica" SIZE="3"><H3>From RFC 2068 <i>Hypertext Transfer Protocol -- HTTP/1.1</i>:</H3>
    </FONT><FONT FACE="Helvetica" SIZE="3"><H4>10.4.5 404 Not Found</H4>
    </FONT><P><FONT FACE="Courier New">The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.</p><p>If the server does not wish to make this information available to the client, the status code 403 (Forbidden) can be used instead. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address.</FONT></P>
    </FONT></TD></TR>
    </TABLE>
     
    </BODY>
    </HTML>

    Unfortunately, this doesn’t work. Are those commands not supported yet, which would be the reason why we don’t find them in the documentation? Or maybe the opposite: they do not work and, rather than fix them, they were removed from the documentation. One thing I’m 100% sure of: start-service and stop-service are the most useful commands for a CLI giving easy access to an hourly billed service, and I want them to work. And it is Python, JSON and HTML – nothing hidden there.

    Hack Fix

    The error message is about no matching URL. PSM metadata is stored in your user directory (~/.psm/data on Linux) with one JSON file for each Oracle platform service. Having a look at the URLs in ADWC.json the bug is obvious:

    $ jq . ~/.psm/data/ADWC.json | grep uri
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/scale",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/start",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/stop",
    "uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restart",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups/{backupId}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups/{jobId}",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
    "uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/healthcheck",
    "uri": "/paas/api/v1.1/activitylog/{identityDomainId}/job/{jobId}",
    "uri": "/paas/api/v1.1/activitylog/{identityDomainId}/filter",

    Let’s remove this ‘/core’ from the uri:

    sed -ie 's/core[/]//g' ~/.psm/data/ADWC.json

    And run again the start-service:

    $ psm adwc start-service -s ADWC -wc true
     
    Message: Job submitted successfully for start of service/system
    Job ID: 25617877
    Waiting for the job to complete... (it cannot be cancelled)

    Here it is.

    As my laptop is on Windows, where I use Cygwin, I have set up two icons with:
    C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc stop-service -s ADWC -wc true
    C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc start-service -s ADWC -wc true

    Start/Stop time

    CaptureStartStop
    This ADWC service is a PDBaaS. Starting and stopping is as easy as opening and closing a pluggable database. Here are the timestamps after starting and stopping in a loop (with a graph over 24 loops).

    The startup time is around 30 seconds. The stop time is about a minute. Really easy to use.


    $ psm adwc activities -s ADWC -l 50
    Operation Type Status Start Time End Time
    START_SERVICE SUCCEED 2018-04-29T14:30:36.888+0000 2018-04-29T14:31:21.563+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:27:26.551+0000 2018-04-29T14:27:35.610+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:25:22.172+0000 2018-04-29T14:25:51.586+0000
    START_SERVICE SUCCEED 2018-04-29T14:20:47.957+0000 2018-04-29T14:21:38.131+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:08:09.409+0000 2018-04-29T14:08:48.125+0000
    START_SERVICE SUCCEED 2018-04-29T14:07:24.892+0000 2018-04-29T14:08:08.244+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:04:57.566+0000 2018-04-29T14:05:27.458+0000
    START_SERVICE SUCCEED 2018-04-29T14:03:51.035+0000 2018-04-29T14:04:34.108+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:03:17.701+0000 2018-04-29T14:03:47.262+0000
    START_SERVICE SUCCEED 2018-04-29T14:02:00.944+0000 2018-04-29T14:02:50.978+0000
    STOP_SERVICE SUCCEED 2018-04-29T14:00:56.990+0000 2018-04-29T14:01:29.567+0000
    START_SERVICE SUCCEED 2018-04-29T13:59:52.898+0000 2018-04-29T14:00:39.373+0000
    STOP_SERVICE SUCCEED 2018-04-29T13:59:19.380+0000 2018-04-29T13:59:49.011+0000
    START_SERVICE SUCCEED 2018-04-29T13:58:15.594+0000 2018-04-29T13:58:58.937+0000
    STOP_SERVICE SUCCEED 2018-04-29T13:57:42.355+0000 2018-04-29T13:58:11.845+0000
    ...

    An easy command line without having to provide a password interactively, the ability to wait for completion, fast operations: this gives a great user experience for this service. The only problem is when you play with several cloud accounts. I’ll show an idea in the next post.

     

    Cet article ADWC: start/stop with PSM Command Line Interface est apparu en premier sur Blog dbi services.

    ADWC: Creation of Autonomous Database Cloud service

    Mon, 2018-04-30 23:00

    You want to try the Autonomous Database Cloud Service? That’s easy. Here is a Step-by-Step.

    Cloud Credits

    CaptureOCIcredits
    First, you need Cloud Credits. You may have bought them (any recent negotiation with Oracle Sales, even for on-premises, involves some Cloud Credits). You can have a free trial with 300$ Cloud Credits available for 1 month. To get another month, you need a different e-mail address and a different credit card number (not charged). It is quite easy to have different e-mail addresses, and your bank may provide virtual credit cards where the number changes each time. Or you may have the 5000$ Cloud Credits available for 1 year from the Education program. I got those thanks to the ACE Director program.

    Update 01-MAY-2018 – There’s also the 500$ credits from the “white glove” program – you can ask your Sales representative.

    In all cases you will be able to test the service without spending too many credits because:

    • This service is not expensive ($2.5 per OCPU per Hour in Pay As You Go)
    • It is very easy to start and stop the service, and then pay only for the hours where you connect
    • If you choose ‘Bring Your Own License’ in the creation, the per OCPU per Hour is only $0.48 (but be sure that you have covered
      See https://cloud.oracle.com/en_US/datawarehouse/pricing)
    • Capturebilling

    • And finally, during the trial promotion, the credits are consumed at a discounted rate
      (after 9 hours of usage, I got less than 1$ used)
    OCI Account

    CaptureOCIaccount
    The first generation of Oracle Cloud is now called ‘OCI Classic’, and you can distinguish it when connecting because the Sign-In page mentions ‘Traditional Cloud Account’. You cannot access ADWC with this account.

    You need access to OCI (Oracle Cloud Infrastructure – the version 2 of the Oracle Cloud).
    If, when signing in, you are welcomed by this guy looking at his phone, you are in the right place. I’m always curious about how they choose an image for a page used every day and for several years. The oracle.com login page is easy with the blurry headquarters shot. For the OCI account, they chose the “Man On Smart Phone – Young Business Man Texting In Airport – Casual Urban Professional Businessman Using Smartphone App Smiling Happy Inside Office Building Or Airport” from the Adobe image stock.

    Ashburn

    CaptureOCIregion
    For the moment, the ADWC service is available only in the Ashburn Cloud Center. Not yet in Europe (but planned for Frankfurt). You can see the regions here: https://cloud.oracle.com/data-regions. Then, when you receive your access to the Oracle Cloud Services, choose the Ashburn Data Center.

    Update 01-MAY-2018 – It seems that the service is available in Frankfurt.

    Create Instance

    CaptureOCIcreateCaptureOCIcreate2
    The instance creation is easy and fast. It will create a Pluggable Database (PDB) in the Oracle Cloud CDB. You provide a name and an ADMIN password (be careful, the rule is at least 12 characters), which is the password you’ll use to connect as the ADMIN user. You can change it later and add new users. The Shape is different from the DBaaS one here. You define the number of threads you want to use (it actually sets the CPU_COUNT for the PDB) and the size of the PDB datafiles. You can change both later with Scale Up/Down.

    PaaS Service Manager Command Line Interface

    You can also create an ADWC service from the command line. I’ll show how to install and use PSM (the PaaS Service Manager Command Line Interface). Rodrigo Jorge has a nice description for DBaaS on his blog.

    So, you download PSM:

    curl -X GET -u my.cloud.account@me.com:MyP@ssw0rd -H X-ID-TENANT-NAME:idcs-31bbd63c3cb9466cb8a96f627b6b6116 https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/idcs-31bbd63c3cb9466cb8a96f627b6b6116/client -o psmcli.zip
     
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
    100 86945 0 86945 0 0 16806 0 --:--:-- 0:00:05 --:--:-- 23820

    CaptureOCIurl

    The user:password are those you use in the account Sign-In.

    You get the ‘Tenant Name’ from the URL of this Man On Smart Phone Sign-In web page. You will also see it mentioned later as the ‘Identity domain’ (like in OCI Classic). If you have a doubt, create the service from the web console, click on it and you will see the Tenant Name.

    CapturePIP3
    So, you have a zip file and do not unzip it. It is a Python 3 module and you install it with ‘pip3′. You can do that in any OS.

    I have the strange idea to run my laptop on Windows with Cygwin for command line stuff. Here are the python3 packages I have here.

    Here is the installation of PSM:

    pip3 install -U psmcli.zip
     
    Processing ./psmcli.zip
    ...

    And now the nice thing is that you configure your credentials only once, with ‘psm setup’. You provide the user, password and tenant name (which is called ‘identity domain’ here):

    $ psm setup
     
    Username: my.cloud.account@me.com
    Password: MyP@ssw0rd
    Retype Password: MyP@ssw0rd
    Identity domain: idcs-31bbd63c3cb9466cb8a96f627b6b6116
    Region [us]:
    Output format [short]:
    Use OAuth? [n]:
    ----------------------------------------------------
    'psm setup' was successful. Available services are:
     
    o ADWC : Oracle Autonomous Data Warehouse Cloud
    o ADWCP : Oracle Autonomous Data Warehouse Cloud Platform
    o ANALYTICS : Oracle Analytics Cloud
    o APICS : Oracle API Platform Cloud Service
    o APICatalog : Oracle API Catalog Service
    ...

    ADWC is on the list. You are ready to manage ADWC instances, such as creating one:


    $ psm adwc create-service -c - <<<' {
    "serviceName": "ADWCx",
    "adminPassword": "Ach1z00dAch1",
    "numCpus": "1",
    "storageCapacity": "1",
    "serviceLevel": "PAAS",
    "serviceVersion": "18.1.1",
    "managedSystemType": "oracle",
    "enableNotification": true,
    "notificationEmail": "notifocations@me.com",
    "isBYOL": true
    } '
     
    Message: Submitted job to create service [ADWCx] in domain [idcs-31bbd63c3cb9466cb8a96f627b6b6116].
    Job ID: 25509908

    We can check the status of the job:
    $ psm adwc activities --service-name ADWC
    Operation Type    Status     Start Time                      End Time
    CREATE_SERVICE    RUNNING    2018-04-28T19:57:31.056+0000    N/A

    And a few minutes later the service is there:
    $ psm adwc activities --service-name ADWC
    Operation Type    Status     Start Time                      End Time
    CREATE_SERVICE    SUCCEED    2018-04-28T19:57:31.056+0000    2018-04-28T19:59:51.900+0000
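
    If you do not want to re-run this check manually, a small shell loop can poll the job until it leaves the RUNNING state. This is only a quick sketch built on the same psm command as above:

    # poll the ADWC creation job every 30 seconds until it is no longer RUNNING
    while psm adwc activities --service-name ADWC | grep -q RUNNING
    do
      sleep 30
    done
    # display the final status
    psm adwc activities --service-name ADWC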

    We will see how to connect in a future post: it is very easy from SQL Developer or SQLcl.

    You can delete the service when you don’t need it anymore:

    psm adwc delete-service --service-name ADWC

    To save credits, you want an easy way to stop and start the service. That’s for the next post, as PSM requires a little hack there.

     

    This article, ADWC: Creation of Autonomous Database Cloud service, appeared first on Blog dbi services.

    Oracle 18c clone PDB and Transparent Data Encryption

    Mon, 2018-04-30 10:02

    Let’s do some tests with the new Oracle 18c feature for creating PDB clones with DBCA. Unfortunately, this feature does not work when TDE is enabled.

    Just as a reminder, with Oracle 12.2 we could create PDBs with DBCA only from PDB$SEED or from unplugged PDBs:

    [screenshot: DBCA 12.2 PDB creation options]

    Now in version 18c we can create PDBs from existing PDBs as follows (for this test TDE is disabled):

    [screenshot: DBCA 18c option to clone an existing PDB]

    You can choose the Pluggable Database you want to clone.

    [screenshot: DBCA selection of the PDB to clone]

    You select the name of your cloned PDB, and in less than 1 minute your original PDB is cloned:

    oracle@localhost:/u00/app/oracle/oradata/PSI/ [DB18] sq
    SQL*Plus: Release 18.0.0.0.0 Production on Mon Apr 30 12:55:54 2018
    Version 18.1.0.0.0
    Copyright (c) 1982, 2017, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
    Version 18.1.0.0.0
    
    SQL> show pdbs
       CON_ID CON_NAME         OPEN MODE  RESTRICTED
            2 PDB$SEED         READ ONLY  NO
            3 PDB1             READ WRITE NO
            5 PDBNEW           READ WRITE NO
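
    The same clone can also be done directly in SQL, without DBCA. This is just a sketch with example names, assuming the CDB uses local undo and archivelog mode (needed to clone a PDB that is open read-write) and OMF (db_create_file_dest), so that no FILE_NAME_CONVERT clause is required:

    -- hot clone of PDB1 into a new PDB (PDBNEW2 is just an example name)
    create pluggable database PDBNEW2 from PDB1;
    -- the clone is created in MOUNTED state, open it
    alter pluggable database PDBNEW2 open;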

    Let’s now do some tests with PDBs and TDE.

    In Oracle 18c it is no longer mandatory to configure the sqlnet.ora file; we only have to define wallet_root and tde_configuration, as follows:

    SQL> alter system set wallet_root='/u00/app/oracle/admin/DB18/wallet_cdb' scope=spfile;
    SQL> startup force;
    ORACLE instance started.
     Total System Global Area 1677717664 bytes
    Fixed Size          8896672 bytes
    Variable Size         520093696 bytes
    Database Buffers     1140850688 bytes
    Redo Buffers            7876608 bytes
    Database mounted.
    Database opened.
    
    SQL> alter system set tde_configuration="keystore_configuration=file" scope=both;
    System altered.

    We create a keystore in the CDB:

    SQL> administer key management create keystore identified by manager_cdb;
    keystore altered.

    The wallet file is created:

    SQL> !ls /u00/app/oracle/admin/DB18/wallet_cdb/tde
    ewallet.p12

    We open the keystore for the CDB and the PDBs:

    SQL> administer key management set keystore open identified by manager_cdb 
    container=all;
    keystore altered.

    We check in the pluggable database:

    SQL> alter session set container=pdb1;
    Session altered.
    SQL> select  status from v$encryption_wallet;
    STATUS
    OPEN_NO_MASTER_KEY

    But the status is OPEN_NO_MASTER_KEY: the keystore is open, but no TDE master encryption key has been set yet.

    We return to the CDB and set the master key:

    SQL> connect / as sysdba
    Connected.
    SQL> administer key management set key identified by manager_cdb with backup;
    keystore altered.

    In the PDB:

    SQL> alter session set container=pdb1;
     
    Session altered.
     
    SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY manager_cdb with backup;
     
    keystore altered.
     
    SQL> select status from v$encryption_wallet;
     
    STATUS
     
    OPEN

    The keystore is now open in the PDB and we can encrypt data. With Oracle 18c there are two keystore modes: united (the CDB owns the keystore for itself and its PDBs) and isolated (the PDB has its own keystore). In our case we are in united mode; let’s see if we can clone the PDB.

    SQL> SELECT KEYSTORE_MODE FROM V$ENCRYPTION_WALLET;
    
    KEYSTORE
    
    UNITED
    
    

    We do the same DBCA clone operation as previously, but this time the assistant asks us for the keystore password:

    [screenshot: DBCA prompting for the keystore password]

    The clone fails with ORA-46697 even when the password is provided. Looking at the Oracle error messages, we can find a similar error for PDB switchover in MOS Note 2378945.1: “We only support this with auto login wallet”.

    So I decided to implement an auto-login keystore in my configuration and to try to clone my PDB again:

    SQL> administer key management create local auto_login keystore from keystore '/u00/app/oracle/admin/DB18/wallet_cdb/tde' identified by manager_cdb;
    
    SQL> startup force;
    ORACLE instance started.
    
    Total System Global Area 1677717664 bytes
    Fixed Size		    8896672 bytes
    Variable Size		  520093696 bytes
    Database Buffers	 1140850688 bytes
    Redo Buffers		    7876608 bytes
    Database mounted.
    Database opened.

    My TDE configuration is now in auto-login mode:

    SQL> select wrl_type,status, wallet_type from v$encryption_wallet;
    
    WRL_TYPE	     STATUS			    WALLET_TYPE
    FILE		     OPEN			    LOCAL_AUTOLOGIN

    But even with TDE in auto-login mode, the DBCA PDB clone operation fails with the same ORA-46697 error message.
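
    A workaround worth mentioning is to clone the PDB manually in SQL and pass the keystore password in the statement, using the KEYSTORE IDENTIFIED BY clause documented for cloning PDBs that contain encrypted data. This is only a sketch with example names and the keystore password used above; it was not tested in this post:

    -- manual clone of a PDB containing encrypted data (united keystore mode)
    -- the keystore must be open, and its password is passed in the statement
    create pluggable database PDBCLONE from PDB1
      keystore identified by manager_cdb;
    alter pluggable database PDBCLONE open;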

    We also encounter this behavior with the new 18c PDB snapshot feature, which allows creating PDB snapshots manually or automatically:

    SQL> create pluggable database snap_pdb1 from pdb1
      2  file_name_convert = ('snap_pdb1', 'pdb1')
      3* snapshot mode every 60 minutes
    create pluggable database snap_pdb1 from pdb1
    *
    ERROR at line 1:
    ORA-46697: Keystore password required.
    
    

    Cloning PDBs is a very useful tool for mass deployments to development teams; it would be nice to have it working with TDE enabled.

    
    
     

    This article, Oracle 18c clone PDB and Transparent Data Encryption, appeared first on Blog dbi services.
