OTN TechBlog


Pizza, Beer, and Dev Expertise at Your Local Meet-up

Wed, 2018-05-16 06:30

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work.

Meet-ups offer a fantastic alternative. They’re easily accessible local events, generally lasting a couple of hours. Meet-ups offer a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities.

One big meet-up advantage is that by virtue of their smaller scale they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups “where we can meet people who work on the same things.” As of this podcast two meet-ups have already taken place, with a third scheduled for the end of May.

Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. “There's so many events going on and there's so many places where developers can go,” Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more “cool.”

Phil’s goal in the effort was to organize smaller events that were “a little less formal, and a bit more convenient.” Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. “It can take quite a bit of effort to organize your day if you’re going to be out for a whole day to attend a user group special interest group event,” Phil says. But local events scheduled in the evening require much less planning in order to attend. “It's great! You can get out and attend these things and you get to talk to people just as much as you would at a day-time event.”

For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies. And those that were available were limited to database and SaaS. “So for me this was a way to get moving and create events for developers,” Ruben says.

What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You’ll learn all that and more in this podcast. Listen!

 

The Panelists

Jon-Petter Hjulstad
Department Manager, SYSCO AS

Ruben Rodriguez Santiago
Java, ADF, and Cloud Solution Specialist, Avanttic

Luis Weir
CTO, Oracle DU, Capgemini

Phil Wilkins
Senior Consultant, Capgemini

Additional Resources

Coming Soon
  • What Developers Need to Know About API Monetization
  • Best Practices for API Development
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

 

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

Wed, 2018-05-09 15:55

In the April release of Oracle Developer Cloud Service we started supporting Docker and Terraform builds as part of the CI & CD pipeline. Terraform helps you provision Oracle Cloud Infrastructure instances as part of the build pipeline. But what if you want to provision the instance using a custom image instead of the base image? You need a tool like Packer to script the creation of images. With Docker build support, we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you understand how you can use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure.

About Packer

HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

You can read more about Packer on https://www.packer.io/

You can find the details of Packer support for Oracle Cloud Infrastructure here.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would build the image which can be used for provisioning.

Packer: A tool for creating custom machine images in the cloud. Here we will create images for Oracle Cloud Infrastructure, popularly known as OCI. For the rest of this blog I will refer to it as OCI.

Packer Scripts

To execute the Packer scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload three files to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Packer script folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>
Command_prompt:> git push origin master

Note: Ensure that the Git repository is created and you have the HTTPS URL for it.

Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service.

Description of the files:

oci_api_key.pem – This file is required for OCI access. It contains the private API signing key (a PEM-encoded RSA key).

Note: Please refer to the links below for details on OCI API keys. You will also need to upload the corresponding public key in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

build.json: This is the only configuration file that you need for Packer. This JSON file contains all the definitions needed for Packer to create an image on Oracle Cloud Infrastructure. I have truncated the OCIDs and the fingerprint for security reasons.

 

{ "builders": [ { "user_ocid":"ocid1.user.oc1..aaaaaaaa", "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay", "fingerprint":"29:b1:8b:e4:7a:92:ae", "key_file":"oci_api_key.pem", "availability_domain": "PILZ:PHX-AD-1", "region": "us-phoenix-1", "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal", "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd", "image_name": "RedisOCI", "shape": "VM.Standard1.1", "ssh_username": "ubuntu", "ssh_password": "welcome1", "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa", "type": "oracle-oci" } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt-get update", "sudo apt-get install -y redis-server" ] } ] }

You can give a value of your choice for image_name, and providing ssh_password is recommended but optional. I have kept ssh_username as "ubuntu" because my base image OS is Ubuntu. Leave the type and shape as they are. The base_image_ocid depends on the region; different regions have different OCIDs for the base images. Please refer to the link below to find the OCID of the base image for your region.

https://docs.us-phoenix-1.oraclecloud.com/images/

Now log in to your OCI console to retrieve some of the details needed for the build.json definitions.

Below screenshot shows where you can retrieve your tenancy_ocid from.

Below screenshot of OCI console shows where you will find the compartment_ocid.

Below screenshot of OCI console shows where you will find the user_ocid.

You can retrieve the region and availability_domain as shown below.

Now select the compartment, which is “packerTest” for this blog, then click on the Networking tab and then the VCN you have created. Here you will see one subnet for each availability domain. Copy the OCID of the subnet that corresponds to the availability_domain you have chosen.

Dockerfile: This installs Packer in Docker and runs the Packer command to create a custom image on OCI. It pulls the packer:full image, adds the build.json and oci_api_key.pem files to the Docker image, and then executes the packer build command.

 

FROM hashicorp/packer:full
ADD build.json ./
ADD oci_api_key.pem ./
RUN packer build build.json
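If you want to sanity-check the same flow locally before wiring it into the build pipeline, a hedged sketch is shown below. It assumes Docker (and optionally Packer) is installed on your machine, that the truncated OCIDs in build.json have been replaced with real values, and the image tag packer-oci-build is arbitrary:

# Building the Docker image executes the RUN step, which runs packer build inside the container
docker build -t packer-oci-build .

# Or, run Packer directly without Docker
packer validate build.json
packer build build.json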

 

Configuring the Build VM

With our latest release, you will have to create a build VM with the Docker software bundle to be able to execute the Packer build, as we are using Docker to install and run Packer.

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “DockerTemplate” for our blog.

 

Build Job Configuration

Click on the “+ New Job” button and, in the dialog that pops up, give the build job a name of your choice and select the build VM template (DockerTemplate) that we created earlier in the blog from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and the branch you want to build from. You may select the checkbox to configure an automatic build trigger on SCM commits.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added and you are done with the build job configuration. Now click on Save to save the build job configuration.

On execution of the build job, the image gets created in the defined compartment on OCI, as shown in the screenshot below.

So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Packing!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Infrastructure as Code using Terraform on Oracle Developer Cloud

Wed, 2018-05-09 14:04

With our April release, we have started supporting Terraform builds in Oracle Developer Cloud. This blog will help you understand how you can use Terraform in a build pipeline to provision Oracle Cloud Infrastructure as part of the build pipeline automation.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would provision the infrastructure for our usage.

Terraform: A tool for provisioning infrastructure in the cloud. Here we will provision on Oracle Cloud Infrastructure, popularly known as OCI. For the rest of this blog I will refer to it as OCI.

 

About Terraform

Terraform is a tool which helps you to write, plan and create your infrastructure safely and efficiently. Terraform can manage existing and popular service providers like Oracle, as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. It helps you to build, manage and version your code. To know more about Terraform go to: https://www.terraform.io/

 

Terraform Scripts

To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for the script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>
Command_prompt:> git push origin master

Below is the folder structure description for the terraform scripts that I have in the Git Repository on Oracle Developer Cloud Service.

The terraform scripts are inside the exampleTerraform folder and the oci_api_key_public.pem and oci_api_key.pem are the OCI keys.

In the exampleTerraform folder we have all the “tf” extension files along with the env-vars file. You will be able to see the definition of the files later in the blog.

In the “userdata” folder you will have the bootstrap shell script which will be executed when the VM first boots up on OCI.

Below is the description of each file in the folder and the snippet:

env-vars: This is the most important file; it sets all the environment variables that the Terraform scripts use for accessing OCI and provisioning the instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"

### Region
export TF_VAR_region="us-phoenix-1"

### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: all the OCIDs above are truncated for security and brevity.

The OCI console screenshots below show where to locate these OCIDs:

tenancy_ocid and region

compartment_ocid:

user_ocid:

Point to the paths of the RSA key files used for the SSH connection and of the OCI API private key PEM file, both of which are stored in the Git repository.

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID of a base image available out of the box on OCI; it varies depending on the region where your OCI instance will be provisioned. Use this link to learn more about the OCI base images. Here we also configure the path of the bootstrap file, which resides in the userdata folder and is executed when the OCI machine boots.

variable "tenancy_ocid" {} variable "user_ocid" {} variable "fingerprint" {} variable "private_key_path" {} variable "region" {} variable "compartment_ocid" {} variable "ssh_public_key" {} variable "ssh_private_key" {} # Choose an Availability Domain variable "AD" { default = "1" } variable "InstanceShape" { default = "VM.Standard1.2" } variable "InstanceImageOCID" { type = "map" default = { // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0" // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q" //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a" //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq" } } variable "DBSize" { default = "50" // size in GBs } variable "BootStrapFile" { default = "./userdata/bootstrap" }

compute.tf: The display name, compartment OCID, image to be used, shape, and network parameters need to be configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFInstance" image = "${var.InstanceImageOCID[var.region]}" shape = "${var.InstanceShape}" create_vnic_details { subnet_id = "${oci_core_subnet.ExampleSubnet.id}" display_name = "primaryvnic" assign_public_ip = true hostname_label = "tfexampleinstance" }, metadata { ssh_authorized_keys = "${var.ssh_public_key}" } timeouts { create = "60m" } }

network.tf: Here we have the Terraform script for creating VCN, Subnet, Internet Gateway and Route table. These are vital for the creation and access of the compute instance that we provision.

resource "oci_core_virtual_network" "ExampleVCN" { cidr_block = "10.1.0.0/16" compartment_id = "${var.compartment_ocid}" display_name = "TFExampleVCN" dns_label = "tfexamplevcn" } resource "oci_core_subnet" "ExampleSubnet" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" cidr_block = "10.1.20.0/24" display_name = "TFExampleSubnet" dns_label = "tfexamplesubnet" security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"] compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" route_table_id = "${oci_core_route_table.ExampleRT.id}" dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}" } resource "oci_core_internet_gateway" "ExampleIG" { compartment_id = "${var.compartment_ocid}" display_name = "TFExampleIG" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" } resource "oci_core_route_table" "ExampleRT" { compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" display_name = "TFExampleRouteTable" route_rules { cidr_block = "0.0.0.0/0" network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}" } }

block.tf: The script below defines a block volume for the compute instance being provisioned and attaches it to the instance.

resource "oci_core_volume" "TFBlock0" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFBlock0" size_in_gbs = "${var.DBSize}" } resource "oci_core_volume_attachment" "TFBlock0Attach" { attachment_type = "iscsi" compartment_id = "${var.compartment_ocid}" instance_id = "${oci_core_instance.TFInstance.id}" volume_id = "${oci_core_volume.TFBlock0.id}" }

provider.tf: In the provider script the OCI details are set.

 

provider "oci" { tenancy_ocid = "${var.tenancy_ocid}" user_ocid = "${var.user_ocid}" fingerprint = "${var.fingerprint}" private_key_path = "${var.private_key_path}" region = "${var.region}" disable_auto_retries = "true" }

datasources.tf: Defines the data sources used in the configuration

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
  compartment_id      = "${var.compartment_ocid}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  instance_id         = "${oci_core_instance.TFInstance.id}"
}

# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
  vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration, which are the public and private IPs of the provisioned instance.

# Output the private and public IPs of the instance
output "InstancePrivateIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}

output "InstancePublicIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once it is available.

resource "null_resource" "remote-exec" { depends_on = ["oci_core_instance.TFInstance","oci_core_volume_attachment.TFBlock0Attach"] provisioner "remote-exec" { connection { agent = false timeout = "30m" host = "${data.oci_core_vnic.InstanceVnic.public_ip_address}" user = "ubuntu" private_key = "${var.ssh_private_key}" } inline = [ "touch ~/IMadeAFile.Right.Here", "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}", "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic", "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc" ] } }

Oracle Infrastructure Cloud - Configuration

The major configuration needed on OCI is security related, so that Terraform is able to authenticate and provision an instance.

Click the username at the top of the Oracle Cloud Infrastructure console; a drop-down appears, from which you select User Settings.

Now click on the “Add Public Key” button to open the dialog, paste the contents of the public key (oci_api_key_public.pem) into it, and click on the Add button.

Note: Please refer to the links below for details on OCI key.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

Configuring the Build VM

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”.

On creation of the template click on “Configure Software” button.

Select Terraform from the list of software bundles available for configuration and click on the + sign to add it to the template.

Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “terraformTemplate” for our blog.

Build Job Configuration

As part of the build configuration, add Git from the “Add Source Control” dropdown. And now select the repository and the branch that you have selected. You may select the checkbox to configure automatic build trigger on SCM commits.

Select the Unix Shell Builder from the Add Builder dropdown, then add a build script along the lines of the sketch below. The script first configures the environment variables using env-vars, then copies oci_api_key.pem and oci_api_key_public.pem to the specified directory, and then executes the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan, and terraform apply.
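A minimal sketch of such a shell builder step is shown below. It is not the exact script from the original post; it assumes the repository layout described above, that the build runs from the workspace root, and that /home/builder/.terraform.d is the directory referenced by TF_VAR_private_key_path in env-vars.

#!/bin/bash
# Configure the TF_VAR_* environment variables used by the Terraform scripts
source exampleTerraform/env-vars

# Copy the OCI API keys to the directory referenced by private_key_path
mkdir -p /home/builder/.terraform.d
cp oci_api_key.pem oci_api_key_public.pem /home/builder/.terraform.d/

cd exampleTerraform

# Initialize, plan, and apply the configuration (-auto-approve keeps the build non-interactive)
terraform init
terraform plan
terraform apply -auto-approve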

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

After execution, the build prints the public and private IP addresses of the provisioned instance as output and then makes an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file, which gets generated as part of the build execution. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will be able to see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the SSH connection made to it. If everything goes fine, the build job should complete successfully.

Now you can go to the Oracle Cloud Infrastructure console to see that the instance has already been created for you, along with the network and volumes defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Developer Cloud Service May Release Adds K8S, OCI, Code Editing and More

Tue, 2018-05-08 11:00

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases.

Here are some highlights of the new version:

Extended build server software

You can now create build jobs and pipelines that leverage:

  • Kubernetes - use the kubectl command line to manage your Docker containers
  • OCI Command line - to automate provisioning and configuration of Oracle Compute 
  • Java 9 - for your latest Java project deployments
  • Oracle Development Tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps

 

Build Server Software Options

SSH Connection in Build

You can now define an SSH connection as part of your build configuration, allowing you to securely connect and execute shell scripts on Oracle Cloud Services.

In Browser Code Editing and Versioning 

A new "pencil" icon lets you edit code in your private Git repositories hosted in Developer Cloud Service directly in your browser. Once you have edited the code, you can commit the changes directly to your branch, providing a commit message.

Code editing in the browser

PagerDuty Webhook

Continuing our principle of keeping the environment open, we have added new webhook support that allows you to send events to the popular PagerDuty solution.

Increased Reusability

We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. If you created a useful issue search - you can share it with others in your team.

There are many other features that will improve your daily work; have a look at the What's New in DevCS document for more information.

Happy development!

A New Oracle Autonomous Visual Builder Cloud Service - Visual and Coding Combined

Mon, 2018-05-07 14:39

We are happy to announce the availability of Oracle Autonomous Visual Builder Cloud Service (VBCS) - Oracle's visual low-code development platform for JavaScript based applications with built-in autonomous capabilities.

Over the past couple of years, the visual development approach of VBCS has made it a very attractive solution to citizen developers who leveraged the no-code required nature of the platform to build their custom applications.

Many professional developers also expressed interest in the visual development experience they saw, but they were looking for additional capabilities.

Specifically developers were demanding an option to have direct access to the code that the visual tools created so they can change it and enhance it with their own custom code to achieve richer behaviors.

With the new VBCS version we are addressing these demands by adding direct access to manipulate the code, while keeping the low-code characteristics of VBCS.

Visual and Code Based Development Combined

Just like in previous versions, constructing the UI is done through a visual WYSIWYG layout editor. Existing VBCS users will notice that they now have access to a much richer set of UI components in the component palette. In fact they now have access to all of the components offered by Oracle JET (Oracle's open-source JavaScript Extension Toolkit). In addition you can add more components to the palette using the Web-components standard based Oracle JET composite components architecture (CCA).

The thing to note about the visual editor is the new "Code" button at the top right, clicking this button will give professional developers direct access to the HTML code that makes up the page layout.  They'll be happy to discover that the code is pure HTML/JavaScript/CSS based - which will let them leverage their existing expertise to further enhance and customize it. Developers can directly manipulate that code through the smart code editor leveraging features such as code insight, syntax highlighting, doc access, and reformatting directly in their browser.

The visual development approach is not limited to page layouts. We extend it also to the way you define business logic. The flow of your logic is defined through our new action flow editor, with a collection of operations that you can define in a declarative way and the ability to invoke your own JavaScript code for unique functionality.

Now that developers have direct access to the code, we also added integration with Git, leveraging the private Git repositories provided through Oracle Developer Cloud Service (DevCS). Teams can now leverage the full set of Agile methodology capabilities of DevCS when working on VBCS applications, including issue tracking, version management, agile planning and code review processes.

Mobile and Web Development Unified

With the new version of VBCS we further integrated the development experience across both web browser-based and on-device mobile applications. 

In the same project you can create both types of applications, leveraging the same development approach, application architecture, UI components, and access to custom business objects and external REST services.

Once you are done developing your mobile application, we'll package it for you as an on-device mobile app that you install, test, and run on your devices - leveraging the native look and feel provided by Oracle JET for the various mobile platforms.

Standard-Based Data Openness

With the new version you can now hook up VBCS to any REST data source with a few button clicks, leveraging a declarative approach to consuming external REST source in your application. VBCS is able to parse standard Swagger based service descriptors for easy consumption. Even if you don't have a detailed structure description for a service, the declarative dialog in VBCS makes it easy to define the access to any service, including security settings, header and URL parameters, and more. VBCS is smart enough to parse the structure returned from the service and create variables that will allow you to access the data in your UI with ease.

Let's not forget that VBCS also lets you define your own custom reusable business services. VBCS will create the database objects to store the information in these objects, and will provide you with a powerful secure set of REST services to allow you to access these objects from both your VBCS and external applications.

Visual Builder Cloud Service Goes Autonomous

Today’s Visual Builder Cloud Service release also has built-in autonomous capabilities to automate and eliminate repetitive tasks so you can instead focus on app design and development.

Configuring and provisioning your service is as easy as a single button click. All you need to do is tell us the name you want for your server, and with a click of a button everything is configured for you. You don't need to install and configure your underlying platform - the service automatically provisions a database, an app hosting server, and your full development platform for you.

One click install

The new autonomous VBCS eliminates any manual tasks for the maintenance of your development and deployment platforms. Once your service is provisioned we'll take care of things like patching, updates, and backups for you.

Furthermore autonomous VBCS automatically maintains your mobile app publishing infrastructure. You just need to click a button and we'll publish your mobile app to iOS or Android packages, and host your web app on our scalable backend services that host your data and your applications.

But Wait There is More

There are many other new features you'll find in the new version of Oracle Visual Builder Cloud Service. Whether you are a seasoned JavaScript expert looking to accelerate your delivery, a developer taking your first steps in the wild world of JavaScript development, or a citizen developer looking to build your business application - Visual Builder has something for you.

So take it for a spin - we are sure you are going to enjoy the experience.

For more information and to get your free trial visit us at http://cloud.oracle.com/visual-builder

 

 

Oracle Dev Moto Tour 2018

Mon, 2018-05-07 14:00
 "Four wheels move the body. Two wheels move the soul."
 
The 2018 Developers Motorcycle Tour will start their engines on May 8th, rolling through Japan and Europe to visit User Groups, Java Day Tokyo and Code events. Join Stephen Chin, Sebastian Daschner, and other community luminaries to catch up on the latest technologies and products, as well as bikes, food, Sumo, football or anything fun. 
 
Streaming live from every location! Watch their sessions online at @OracleDevs and follow them for updates. For details about schedules, resources, videos, and more through May and June 2018, visit DevTours 
 
Japan Tour: May 2018
In May, the dev tour motorcycle team will travel to various events, including the Java Day Tokyo conference.  Meet Akihiro Nishikawa, Andres Almiray, David Buck, Edson Yanaga, Fernando Badapoulis, Ixchel Ruiz, Kirk Pepperdine, Matthew Gilliard, Sebastian Daschner, and Stephen Chin.
 
May 8, 2018 Kumamoto Kumamoto JUG
May 10, 2018 Fukuoka Fukuoka JUG
May 11, 2018 Okayama Okayama JUG
May 14, 2018 Osaka Osaka JUG
May 15, 2018 Nagoya Nagoya JUG
May 17, 2018 Tokyo Java Day Tokyo
May 18, 2018 Tokyo JOnsen
May 19, 2018 Tokyo JOnsen
May 20, 2018 Tokyo JOnsen
May 21, 2018 Sendai Sendai JUG
May 23, 2018 Sapporo JavaDo
May 26, 2018 Tokyo JJUG Event
 
The European Tour: June 2018
In June, the dev tour motorcycle team will travel to multiple European countries and cities to meet Java and Oracle developers. Depending on the city and the event, which will include the Code Berlin conference, you'll meet Fernando Badapoulis, Nikhil Nanivadekar, Sebastian Daschner, and Stephen Chin.
 
June 4, 2018 Zurich JUG Switzerland
June 5, 2018 Freiburg JUG Freiburg
June 6, 2018 Bodensee JUG Bodensee
June 7, 2018 Stuttgart JUG Stuttgart
June 11, 2018 Berlin JUG BB
June 12, 2018 Berlin Oracle Code Berlin
June 13, 2018 Hamburg JUG Hamburg
June 14, 2018 Hannover JUG Hannover
June 15, 2018 Münster JUG Münster
June 16, 2018 Köln / Cologne JUG Cologne
June 17, 2018 Munich JUG Munich
 

Oracle Adds New Support for Open Serverless Standards to Fn Project and Key Kubernetes Features ...

Wed, 2018-05-02 02:01

Open serverless project Fn adds support for broader serverless standardization with CNCF CloudEvents, serverless framework support, and OpenCensus for tracing and metrics.

Oracle Container Engine for Kubernetes tackles toughest real-world governance, scale, and management challenges facing K8s users today

Today at Kubecon + CloudNativeCon Europe 2018, Oracle announced new support for several open serverless standards on its open Fn Project and a set of critical new Oracle Container Engine for Kubernetes features addressing key real-world Kubernetes issues including governance, security, networking, storage, scale, and manageability.

Both the serverless and Kubernetes communities are at an important crossroads in their evolution, and to further its commitment to open serverless standards, Oracle announced that the Fn Project now supports standards-based projects CloudEvents and the Serverless Framework. Both projects are intended to create interoperable and community-driven alternatives to today’s proprietary serverless options.

Solving Real World Kubernetes Challenges

The New Stack, in partnership with the Cloud Native Computing Foundation (CNCF) recently published a report analyzing top challenges facing Kubernetes users today. The report found that infrastructure-related issues – specifically security, storage, and networking – had risen to the top, impacting larger companies the most.

  

 

Source: The New Stack

In addition, when evaluating container orchestration, classic non-functional requirements came into play: scaling, manageability, agility, and security. Solving these types of issues will help the Kubernetes project move through the Gartner Hype Cycle “Trough of Disillusionment”, up the “Slope of Enlightenment” and onto the promised land of the “Plateau of Productivity.”

Source: The New Stack

Addressing Real-World Kubernetes Challenges

To address these top challenges facing Kubernetes users today, Oracle Container Engine for Kubernetes has integrated tightly with the best-in-class governance, security, networking, and scale of Oracle Cloud Infrastructure (OCI). These are summarized below:

  • Governance, compliance, & auditing: Identity and Access Management (IAM) for Kubernetes enables DevOps teams not only to control who has access to Kubernetes resources, but also to set policies describing what type of access they have and to which specific resources. This is a crucial element in managing complex organizations: rules are applied to logical groups of users and resources, making it simple to define and administer policies.

    • Governance: DevOps teams can set which users have access to which resources, compartments, tenancies, users, and groups for their Kubernetes clusters. Since different teams typically manage different resources through different stages of the development cycle – from development, test, staging, through production – role-based access control (RBAC) is crucial. Two levels of RBAC are provided: (1) at the OCI IaaS infrastructure resource level defining who can for example spin up a cluster, scale it, and/or use it, and (2) at a Kubernetes application level where fine-grained Kubernetes resource controls are provided.

  • Compliance: Container Engine for Kubernetes will support The Payment Card Industry Data Security Standard (PCI DSS), the globally applicable security standard that customers use for a wide range of sensitive workloads, including the storage, processing and transmission of cardholder data. DevOps teams will be able to run Kubernetes applications on Oracle’s PCI-compliant Cloud Infrastructure Services.

  • Auditing (logging, monitoring): Cluster management auditing events have also been integrated into the OCI Audit Service for consistent and unified collection and visibility.

  • Scale: Oracle Container Engine is a highly available managed Kubernetes service. The Kubernetes masters are highly available (cross availability domains), managed, and secured. Worker clusters are self-healing, can span availability domains, and can be composed of node pools consisting of compute shapes from VMs to bare metal to GPUs.

    • GPUs, Bare Metal, VMs: Oracle Container Engine offers the industry’s first and broadest family of Kubernetes compute nodes, supporting small and virtualized environments, to very large and dedicated configurations. Users can scale up from basic web apps up to high performance compute models, with network block storage and local NVMe storage options.

    • Predictable, High IOPS: The Kubernetes node pools can use either VMs or Bare Metal compute with predictable IOPS block storage and dense I/O VMs. Local NVMe storage provides a range of compute and capacities with high IOPS.

    • Kubernetes on NVIDIA Tesla GPUs: Running Kubernetes clusters on bare Metal GPUs gives container applications access to the highest performance possible. With no hypervisor overhead, DevOps teams should be delighted to have access to bare metal compute instances on Oracle Cloud Infrastructure with two NVIDIA Tesla P100 GPUs to run CUDA based workloads allowing for over 21 TFLOPS of single-precision performance per instance.

  • Networking: Oracle Container Engine is built on a state-of-the-art, non-blocking Clos network that is not over-subscribed and provides customers with a predictable, high-bandwidth, low latency network.

    • Load balancing: Load balancing is often one of the hardest features to configure and manage – Oracle has integrated seamlessly with OCI load balancing to allow container-level load balancing. Kubernetes load balancing checks for incoming traffic on the load balancer's IP address and distributes incoming traffic to a list of backend servers based on a load balancing policy and a health check policy. DevOps teams can define Load Balancing Policies that tell the load balancer how to distribute incoming traffic to the backend servers.

    • Virtual Cloud Network: Kubernetes user (worker) nodes are deployed inside a customer’s own VCN (virtual cloud network), allowing for secure management of IP addresses, subnets, route tables and gateways using the VCN.

  • Storage: Cracking the code on a simple way to manage Kubernetes storage continues to be a major concern for DevOps teams. There are two new IaaS Kubernetes storage integrations designed for Oracle Cloud Infrastructure that can help, unlocking OCI’s industry leading block storage performance (highest IOPS per GB of any standard cloud provider offering), cost, and predictability:

  • Simplified, Unified Management:

    • Bundled in Management: By bundling in commonly used Kubernetes utilities, Oracle Container Engine for Kubernetes makes for a familiar and seamless developer experience. This includes built-in support for Helm and Tiller (providing standard Kubernetes package management), the Kubernetes dashboard, and kube-dns.

    • Running Existing Applications with Kubernetes: Kubernetes supports an ever-growing set of workloads that are not necessarily net new greenfield apps. A Kubernetes Operator is “an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user.” Oracle has open-sourced and will soon generally release an Oracle WebLogic Server Kubernetes Operator which allows WebLogic users to manage WebLogic domains in a Kubernetes environment without forcing application rewrites, retesting and additional process and cost. WebLogic 12.2.1.3 has also been certified on Kubernetes, and the WebLogic Monitoring Exporter, which exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana, has been released and open sourced.

Fn Project: Open serverless initiatives are progressing within the CNCF and the Fn Project is actively engaged and supporting these emerging standards:

  • CloudEvents: The Fn Project has announced support for the Cloud Event standard effort. CloudEvents seeks to standardize event data and simplify event declaration and delivery among different applications, platforms, and providers. Until now, developers have lacked a common way of describing serverless events. This not only severely affects the portability of serverless apps but is also a significant drain on developer productivity.

  • Serverless Framework: Fn Project, an open source functions as a service and workflow framework, has contributed a FaaS provider to the Serverless Framework to further its mission of multi-cloud and on-premise serverless computing. The new provider allows users of the Serverless Framework to easily build and deploy container-native functions to any Fn Cluster while getting the unified developer experience they’re accustomed to. For Fn’s growing community, the integration provides an additional option for managing functions in a multi-cloud and multi-provider world.

    • "With a rapidly growing community around Fn, offering a first-class integration with the Serverless Framework will help bring our two great communities closer together, providing a “no lock-in” model of serverless computing to companies of all sizes from startups to the largest enterprises,” says Chad Arimura, VP Software Development, Oracle.

  • OpenCensus: Fn is now using OpenCensus stats, trace, and view APIs across all Fn code. OpenCensus is a single distribution of libraries that automatically collects traces and metrics from your app, displays them locally, and sends them to any analysis tool. OpenCensus has made good decisions in defining their own data formats that allow developers to use any backends (explicitly not having to create their own data structures simply for collection). This allows Fn to easily stay up to date in the ops world without continuously having to make extensive code changes.

For more information, join Chad Arimura and Matt Stephenson Friday, May 4 for their talk at KubeCon on Operating a Global Scale FaaS on top of Kubernetes.

 

A Quick Look At What's New In Oracle JET v5.0.0

Thu, 2018-04-26 12:35

The newest release of Oracle JET was delivered to the community on April 16th, continuing the foundational concept of delivering a toolkit on a consistent and predictable release schedule that application developers can rely on. This is the 24th consecutive on-schedule release for Oracle JET.

 


This release is primarily a maintenance release, with updates to the underlying open source dependencies where needed, and quite a bit of housekeeping, with the removal of previously deprecated APIs. As always, the Release Notes provide the details, and it's highly recommended that you take some time to read through the sections that describe the removed APIs. Most have been under deprecation notice for well over a year; in some cases, APIs are being removed that had deprecation notices announced almost four years ago. This will help keep things as clean and lightweight as possible going forward.

visual build call to action button example

One of the first things that you'll probably notice is that the Home page now has an option to check out the new Visual Builder Cloud Service. For those who are more familiar and comfortable with a declarative approach to web development, Visual Builder provides a very comprehensive drag and drop approach to developing JET-based applications. If you find yourself in a position where you need to get down to the code while working in Visual Builder, the newest release now provides full code-level development as well. Just hit the Code button and you'll find yourself writing real JET code, with code completion, inline documentation and more. It's the same code that you see in the Cookbook and other sample applications today.

 

New ways to Get Started

The Get Started page has also received a bit of a face lift.  As the JET community continues to grow, there are more developers looking at JET for the first time, and providing multiple ways to get that first experience is important.  You'll now find that you can Get Started by using Visual Builder as described above, or take a quick look at how JET code is structured with a quick sample available on jsFiddle.  Of course the Command Line Interface (ojet-cli) is still the primary method for getting things off the ground with JET.

get started page screenshot

 

Growth and Success

The JET Community continues to grow at a rapid pace and we are proud to have three new Oracle Partners/Customers added to the Success Stories page in this release.  We also added a new Oracle Product which is providing tremendous opportunities for Cloud Startups.  Visit the Success Stories page to learn more about:

 

If you have a JET application, or your company is using JET and you'd like to be included on the JET Success Stories page, please drop a note in the JET Community Forums.

 

A Single Source of Truth for Resource Paths

The Oracle JET Command Line Interface itself has added a few new features in this release. One of the most notable is the consolidation of resource path definitions into one configuration file. If you have tried adding third-party libraries to a JET application in the past, you found yourself adding the path to those libraries in up to three different files to make sure things worked in both development and a production build of the application. Everything is now in a single file called "path-mappings.json". Check out the Migration chapter of the Developer's Guide for details on how to work with this new single source of truth for paths.

path mapping file structure example

 

 

Composite Component Architecture (CCA) continues to mature

Composite Component Architecture (CCA) continues to be a major focus of Oracle JET, and each release brings more enhancements to the metadata and structure of the overall architecture. The best place to keep track of what is happening in CCA development is Duncan Mills' blog series. The latest installment covers changes made in the JET v5.0.0 release.

 

 

Theming gets an update

Theming has always been a significant feature of JET, with the inclusion of SASS (.scss) files for the default Alta theme, themes for the Android, iOS, and Windows platforms, as well as a Theme Builder application to help you build your own theme as needed. In JET v5.0.0 the method for defining the base color scheme has been revised. Take a look at the Theme Changes section of the Release Notes for details, as well as the Theme Builder example on the JET website.

 

New task types in oj-Gantt

The Gantt chart has been gaining features over the last few releases, and with this release comes the ability to add new types of tasks such as a Summary and Milestone. Continue to watch this component over future releases as it matures to meet more and more use cases.

 

 

As always, your comments and constructive feedback are welcome. If you have questions or comments, please engage with the Oracle JET Community in the Discussion Forums, or follow @OracleJET on Twitter.

On behalf of the entire JET development team, Happy Coding!!

 

Announcing the General Availability of MySQL 8.0

Thu, 2018-04-26 08:17

MySQL adds NoSQL and many new enhancements to the world’s most popular open source database:

  1. NoSQL Document Store gives developers the flexibility of developing traditional SQL relational applications and NoSQL, schema-free document database applications.  This eliminates the need for a separate NoSQL document database. 
  2. SQL Window functions, Common Table Expressions, NOWAIT and SKIP LOCKED, Descending Indexes, Grouping, Regular Expressions, Character Sets, Cost Model, and Histograms.
  3. JSON Extended syntax, new functions, improved sorting, and partial updates. With JSON table functions you can use the SQL machinery for JSON data.
  4. GIS Geography support. Spatial Reference Systems (SRS), as well as SRS aware spatial datatypes,  spatial indexes,  and spatial functions.
  5. Reliability DDL statements have become atomic and crash safe, meta-data is stored in a single, transactional data dictionary 
  6. Observability Performance Schema, Information Schema, Invisible Indexes,  Error Logging.
  7. Manageability Persistent Configuration Variables, Undo tablespace management, Restart command, and New DDL.
  8. High Availability InnoDB Cluster delivers an integrated, native, HA solution for your databases.
  9. Security OpenSSL improvements, new default authentication, SQL Roles, breaking up the super privilege, password strength, authorization.
  10. Performance Up to 2x faster than MySQL 5.7.
Developer Features

MySQL 8.0 delivers many new features requested by developers in areas such as SQL, JSON and GIS. Developers also want to be able to store Emojis, thus UTF8MB4 is now the default character set in 8.0.

NoSQL Document Store

MySQL Document Store gives developers maximum flexibility developing traditional SQL relational applications and NoSQL, schema-free document database applications.  This eliminates the need for a separate NoSQL document database.  The MySQL Document Store provides multi-document transaction support and full ACID compliance for schema-less JSON documents.

SQL

Window Functions

MySQL 8.0 delivers SQL window functions in MySQL.   Similar to grouped aggregate functions, window functions perform some calculation on a set of rows, e.g. COUNT or SUM. But where a grouped aggregate collapses this set of rows into a single row, a window function will perform the aggregation for each row in the result set.

Window functions come in two flavors: SQL aggregate functions used as window functions and specialized window functions.
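As a hedged illustration against a hypothetical orders(customer_id, amount) table, the query below shows both flavors: SUM() used as a window function and the specialized ROW_NUMBER() function.

-- Per-row results alongside a per-customer total and ranking
SELECT
  customer_id,
  amount,
  SUM(amount)  OVER (PARTITION BY customer_id)                      AS customer_total,
  ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rank_in_customer
FROM orders;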

Common Table Expression

MySQL 8.0 delivers [Recursive] Common Table Expressions (CTEs) in MySQL. Non-recursive CTEs can be explained as "improved derived tables" as they allow the derived table to be referenced more than once. A recursive CTE is a set of rows which is built iteratively: from an initial set of rows, a process derives new rows, which grow the set, and those new rows are fed into the process again, producing more rows, and so on, until the process produces no more rows.

MySQL CTE and Window Functions in MySQL Workbench 8.0
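As a small sketch, the recursive CTE below generates the numbers 1 through 10.

WITH RECURSIVE seq (n) AS (
  SELECT 1                              -- the initial set of rows
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 10    -- the process that derives new rows
)
SELECT n FROM seq;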

NOWAIT and SKIP LOCKED

MySQL 8.0 delivers NOWAIT and SKIP LOCKED alternatives in the SQL locking clause. Normally, when a row is locked due to an UPDATE or a SELECT ... FOR UPDATE, any other transaction will have to wait to access that locked row. In some use cases there is a need to either return immediately if a row is locked or ignore locked rows. A locking clause using NOWAIT will never wait to acquire a row lock. Instead, the query will fail with an error. A locking clause using SKIP LOCKED will never wait to acquire a row lock on the listed tables. Instead, the locked rows are skipped and not read at all.
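A hedged example against hypothetical seats and jobs tables:

-- Fail immediately with an error if the row is already locked by another transaction
SELECT * FROM seats WHERE seat_no = 42 FOR UPDATE NOWAIT;

-- Pick the next unprocessed row, skipping any rows locked by other sessions
SELECT * FROM jobs WHERE state = 'queued'
ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED;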

Descending Indexes

MySQL 8.0 delivers support for indexes in descending order. Values in such an index are arranged in descending order, and we scan it forward. Before 8.0, when a user created a descending index, we created an ascending index and scanned it backwards. One benefit is that forward index scans are faster than backward index scans.
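A minimal sketch (the table and index names are hypothetical):

CREATE TABLE events (
  id         INT PRIMARY KEY,
  created_at DATETIME
);

-- A true descending index in 8.0; before 8.0 the DESC keyword was ignored
CREATE INDEX idx_events_created_desc ON events (created_at DESC);

-- Served by a fast forward scan of the descending index
SELECT * FROM events ORDER BY created_at DESC LIMIT 10;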

GROUPING

MySQL 8.0 delivers GROUPING(), SQL feature T433. The GROUPING() function distinguishes super-aggregate rows from regular grouped rows. GROUP BY extensions such as ROLLUP produce super-aggregate rows where the set of all values is represented by NULL. Using the GROUPING() function, you can distinguish a NULL representing the set of all values in a super-aggregate row from a NULL in a regular row.
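For example, against a hypothetical sales(region, amount) table:

SELECT
  IF(GROUPING(region), 'ALL REGIONS', region) AS region,  -- label the super-aggregate row
  SUM(amount)                                 AS total
FROM sales
GROUP BY region WITH ROLLUP;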

JSON

MySQL 8.0 adds new JSON functions and improves performance for sorting and grouping JSON values.

Extended Syntax for Ranges in JSON path expressions

MySQL 8.0 extends the syntax for ranges in JSON path expressions. For example, SELECT JSON_EXTRACT('[1, 2, 3, 4, 5]', '$[1 to 3]'); results in [2, 3, 4]. The new syntax is a subset of the SQL standard syntax, described in SQL:2016, 9.39 SQL/JSON path language: syntax and semantics.

JSON Table Functions

MySQL 8.0 adds the JSON_TABLE() table function, which enables the use of the SQL machinery for JSON data. JSON_TABLE() creates a relational view of JSON data: it maps the result of a JSON data evaluation into relational rows and columns. The user can query the result returned by the function as a regular relational table using SQL, e.g. join, project, and aggregate.
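A small sketch that turns an inline JSON array into a relational result set:

SELECT jt.name, jt.price
FROM JSON_TABLE(
  '[{"name":"apple","price":1.20},{"name":"pear","price":2.40}]',
  '$[*]' COLUMNS (
    name  VARCHAR(20)  PATH '$.name',
    price DECIMAL(5,2) PATH '$.price'
  )
) AS jt;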

JSON Aggregation Functions

MySQL 8.0 adds the aggregation functions JSON_ARRAYAGG() to generate JSON arrays and JSON_OBJECTAGG() to generate JSON objects. This makes it possible to combine JSON documents in multiple rows into a JSON array or a JSON object.
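
For example, assuming an employees table with dept, name, and salary columns:

SELECT dept,
       JSON_ARRAYAGG(name)          AS employee_names,
       JSON_OBJECTAGG(name, salary) AS salaries_by_name
FROM employees
GROUP BY dept;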

JSON Merge Functions

The JSON_MERGE_PATCH() function implements the JavaScript-like merge semantics specified by RFC 7396, i.e. it removes duplicates, with the second document taking precedence. For example, JSON_MERGE_PATCH('{"a":1,"b":2}', '{"a":3,"c":4}') returns {"a":3,"b":2,"c":4}.

JSON Improved Sorting

MySQL 8.0 gives better performance for sorting/grouping JSON values by using variable-length sort keys. Preliminary benchmarks show a 1.2x to 18x improvement in sorting, depending on the use case.

JSON Partial Update

MySQL 8.0 adds support for partial update with the JSON_REMOVE(), JSON_SET(), and JSON_REPLACE() functions. If only some parts of a JSON document are updated, the handler is given information about what was changed, so that the storage engine and replication don’t need to write the full document.
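
A typical statement that benefits from partial update (the profiles table and doc column are hypothetical):

UPDATE profiles
SET doc = JSON_SET(doc, '$.address.city', 'Oslo')
WHERE id = 1;

Because only one path changes, the server can avoid rewriting the whole JSON document.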

GIS

MySQL 8.0 delivers geography support. This includes meta-data support for Spatial Reference Systems (SRS), as well as SRS-aware spatial data types, spatial indexes, and spatial functions.
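
For example, a column can now be declared with an explicit geographic SRS and indexed spatially (table and column names are illustrative):

CREATE TABLE places (
  id INT PRIMARY KEY,
  location POINT NOT NULL SRID 4326,
  SPATIAL INDEX (location)
);

The SRID 4326 attribute tells MySQL the column holds latitude/longitude coordinates, and spatial functions and the index treat it accordingly.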

Character Sets

MySQL 8.0 makes UTF8MB4 the default character set. UTF8MB4 is the dominant character encoding for the web, and this move will make life easier for the vast majority of MySQL users.

Cost Model

Query Optimizer Takes Data Buffering into Account

MySQL 8.0 chooses query plans based on knowledge about whether data resides in memory or on disk. This happens automatically; from the end user's perspective, no configuration is involved. Historically, the MySQL cost model assumed data to reside on spinning disks. The cost constants associated with looking up data in memory and on disk are now different, so the optimizer will choose better access methods for the two cases, based on knowledge of the location of the data.

Optimizer Histograms

MySQL 8.0 implements histogram statistics. With Histograms, the user can create statistics on the data distribution for a column in a table, typically done for non-indexed columns, which then will be used by the query optimizer in finding the optimal query plan. The primary use case for histogram statistics is for calculating the selectivity (filter effect) of predicates of the form “COLUMN operator CONSTANT”.
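
Histograms are managed with ANALYZE TABLE; for example, on a hypothetical non-indexed column:

ANALYZE TABLE orders UPDATE HISTOGRAM ON delivery_country WITH 64 BUCKETS;

-- and to remove it again
ANALYZE TABLE orders DROP HISTOGRAM ON delivery_country;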

Reliability

Transactional Data Dictionary

MySQL 8.0 increases reliability by ensuring atomic, crash safe DDL, with the transactional data dictionary. With this the user is guaranteed that any DDL statement will either be executed fully or not at all. This is particularly important in a replicated environment, otherwise there can be scenarios where masters and slaves (nodes) get out of sync, causing data-drift.

Observability

Information Schema (speed up)

MySQL 8.0 reimplements Information Schema. In the new implementation the Information Schema tables are simple views on data dictionary tables stored in InnoDB. This is far more efficient than the old implementation, with up to a 100x speedup.

Performance Schema (speed up)

MySQL 8.0 speeds up performance schema queries by adding more than 100 indexes on performance schema tables. 

Manageability

INVISIBLE Indexes

MySQL 8.0 adds the capability of toggling the visibility of an index (visible/invisible). An invisible index is not considered by the optimizer when it makes the query execution plan. However, the index is still maintained in the background, so it is cheap to make it visible again. The purpose of this is to let a DBA or DevOps engineer determine whether an index can be dropped. If you suspect an index is not being used, you first make it invisible, then monitor query performance, and finally remove the index if no query slowdown is experienced.
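
For example (table and index names are illustrative):

ALTER TABLE orders ALTER INDEX ix_customer INVISIBLE;
-- monitor query performance for a while, then either
ALTER TABLE orders ALTER INDEX ix_customer VISIBLE;
-- or
DROP INDEX ix_customer ON orders;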

High Availability

MySQL InnoDB Cluster delivers an integrated, native, HA solution for your databases. It tightly integrates MySQL Server with Group Replication, MySQL Router, and MySQL Shell, so you don’t have to rely on external tools, scripts or other components.

Security features

OpenSSL by Default in Community Edition

MySQL 8.0 unifies on OpenSSL as the default TLS/SSL library for both MySQL Enterprise Edition and MySQL Community Edition.

SQL roles

MySQL 8.0 implements SQL Roles. A role is a named collection of privileges. The purpose is to simplify the user access right management. One can grant roles to users, grant privileges to roles, create roles, drop roles, and decide what roles are applicable during a session.
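
A small sketch of the typical flow (database, role, and user names are invented for the example):

CREATE ROLE 'app_read', 'app_write';
GRANT SELECT ON app_db.* TO 'app_read';
GRANT INSERT, UPDATE, DELETE ON app_db.* TO 'app_write';

CREATE USER 'alice'@'%' IDENTIFIED BY 'choose_a_password';
GRANT 'app_read', 'app_write' TO 'alice'@'%';
SET DEFAULT ROLE ALL TO 'alice'@'%';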

Performance

MySQL 8.0 is up to 2x faster than MySQL 5.7.  MySQL 8.0 comes with better performance for Read/Write workloads, IO bound workloads, and high contention “hot spot” workloads.

Scaling Read/Write Workloads

MySQL 8.0 scales well on RW and heavy write workloads. On intensive RW workloads we observe better performance already from 4 concurrent users, and more than 2 times better performance on high loads compared to MySQL 5.7. We can say that while 5.7 significantly improved scalability for read-only workloads, 8.0 significantly improves scalability for read/write workloads. The effect is that MySQL improves hardware utilization (efficiency) for standard server-side hardware (such as systems with 2 CPU sockets). This improvement is due to redesigning how InnoDB writes to the REDO log. In contrast to the historical implementation, where user threads were constantly fighting to log their data changes, in the new REDO log solution user threads are lock-free, REDO writing and flushing is managed by dedicated background threads, and the whole REDO processing becomes event-driven.

Utilizing IO Capacity (Fast Storage)

MySQL 8.0 allows users to use every storage device to its full power. For example, testing with Intel Optane flash devices we were able to deliver 1M Point-Select QPS in a fully IO-bound workload.

Better Performance upon High Contention Loads (“hot rows”)

MySQL 8.0 significantly improves the performance of high contention workloads. A high contention workload occurs when multiple transactions are waiting for a lock on the same row in a table, causing queues of waiting transactions. Many real-world workloads are not evenly distributed over, say, a day, but have bursts at certain hours. MySQL 8.0 deals much better with such bursts in terms of transactions per second, mean latency, and 95th percentile latency. The benefit to the end user is better hardware utilization (efficiency), because the system needs less spare capacity and can thus run with a higher average load.

MySQL 8.0 Enterprise Edition

For mission critical applications, MySQL Enterprise Edition provides the following additional capabilities:

  • MySQL Enterprise Backup for full, incremental and partial backups, Point-in-Time Recovery and backup compression.
  • MySQL Enterprise High Availability for integrated, native, HA with InnoDB Cluster.
  • MySQL Enterprise Transparent Data Encryption (TDE) for data-at-rest encryption.
  • MySQL Enterprise Encryption for encryption, key generation, digital signatures and other cryptographic features.
  • MySQL Enterprise Authentication for integration with existing security infrastructures including PAM and Windows Active Directory.
  • MySQL Enterprise Firewall for real-time protection against database-specific attacks, such as SQL injection.
  • MySQL Enterprise Audit for adding policy-based auditing compliance to new and existing applications.
  • MySQL Enterprise Monitor for managing your database infrastructure.
  • Oracle Enterprise Manager for monitoring MySQL databases from existing OEM implementations.
MySQL Cloud Service

Oracle MySQL Cloud Service is built on MySQL Enterprise Edition and powered by Oracle Cloud, providing an enterprise-grade MySQL database service. It delivers the best in class management tools, self service provisioning, elastic scalability and multi-layer security.

Resources

JavaOne Event Expands with More Tracks, Languages and Communities – and New Name

Thu, 2018-04-19 11:00

The JavaOne conference is expanding to create a new, bigger event that’s inclusive to more languages, technologies and developer communities. Expect more talks on Go, Rust, Python, JavaScript, and R along with more of the great Java technical content that developers have come to expect. We’re calling the new event Oracle Code One, October 22-25 at Moscone West in San Francisco.

Oracle Code One will include a Java technical keynote with the latest information on the Java platform from the architects of the Java team.  It will also have the latest details on Java 11, advances in OpenJDK, and other core Java development.  We are planning dedicated tracks for server-side Java EE technology including Jakarta EE (now part of the Eclipse Foundation), Spring, and the latest advances in Java microservices and containers.  There will also be a wealth of community content on client development, JVM languages, IDEs, test frameworks, and more.

As we expand, developers can also expect additional leading edge topics such as chatbots, microservices, AI, and blockchain. There will also be sessions around our modern open source developer technologies including Oracle JET, Project Fn and OpenJFX.

Finally, one of the things that will continue to make this conference so great is the breadth of community run activities such as Oracle Code4Kids workshops for young developers, IGNITE lightning talks run by local JUG leaders, and an array of technology demos and community projects showcased in the Developer Lounge.  Expect a grand finale with the Developer Community Keynote to close out this week of fun, technology, and community.

Today, we are launching the call for papers for Oracle Code One and you can apply now to be part of any of the 11 tracks of content for Java developers, database developers, full stack developers, DevOps practitioners, and community members.  

I hope you are as excited about this expansion of JavaOne as I am and will join me at the inaugural year of Oracle Code One!

Please submit your abstracts here for consideration:
https://www.oracle.com/code-one/index.html

Beyond Chatbots: An AI Odyssey

Wed, 2018-04-18 06:00

This month the Oracle Developer Community Podcast looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

One of the most surprising comments to emerge from this discussion reveals how a character from a 50-year-old feature film factors into one of the most pressing AI challenges.

According to podcast panelist Phil Gordon, CEO and founder of Chatbox.com, the HAL 9000 computer at the center of Stanley Kubrick’s 1968 science fiction classic “2001: A Space Odyssey” is very much on the minds of those now rushing to deploy AI-based solutions. “They have unrealistic expectations of how well AI is going to work and how much it’s going to solve out of the box.” (And apparently they're willing to overlook HAL's abysmal safety record.)

It's easy to see how an AI capable of carrying on a conversation while managing and maintaining all the systems on a complex interplanetary spaceship would be an attractive idea for those who would like to apply similar technology to keeping a modern business on course. But the reality of today’s AI is a bit more modest (if less likely to refuse to open the pod bay doors).

In the podcast, Lyudmil Pelov, a cloud solutions architect with Oracle’s A-Team, explains that unrealistic expectations about AI have been fed by recent articles that portray AI as far more human-like than is currently possible.

“Most people don't understand what's behind the scenes,” says Lyudmil. “They cannot understand that the reality of the technology is very different. We have these algorithms that can beat humans at Go, but that doesn't necessarily mean we can find the cure for the next disease.” Those leaps forward are possible. “From a practical perspective, however, someone has to apply those algorithms,” Lyudmil says.

For podcast panelist Brendan Tierney, an Oracle ACE Director and principal consultant with Oralytics, accessing relevant information from within the organization poses another AI challenge.  “When it comes to customer expectations, there's an idea that it's a magic solution, that it will automatically find and discover and save lots of money automatically. That's not necessarily true.”  But behind that magic is a lot of science.

“The general term associated with this is, ‘data science,’” Brendan explains. “The science to it is that there is a certain amount of experimental work that needs to be done. We need to find out what works best with your data. If you're using a particular technique or algorithm or whatever, it might work for one company, but it might not work best for you. You've got to get your head around the idea that we are in a process of discovery and learning and we need to work out what's best for your data in your organization and processes.”

For panelist Joris Schellekens, software engineer at iText, a key issue is that of retraceability. “If the AI predicts something or if your system makes some kind of decision, where does that come from? Why does it decide to do that? This is important to be able to explain expectations correctly, but also in case of failure—why does it fail and why does it decide to do this instead of the correct thing?”

Of course, these issues are only a sampling of what is discussed by the experienced developers in this podcast. So plug in and gain insight that just might help you navigate your own AI odyssey.

The Panelists Phil Gordon
CEO/founder of Chatbox.com

Twitter LinkedIn 

Lyudmil Pelov
Oracle A-Team Cloud Architect, Mobile, Cloud and Bot Technologies, Oracle

Twitter LinkedIn 

Joris Schellekens
Software Engineer, iText

Twitter LinkedIn

Brendan Tierney
Consultant, Architect, Author, Oralytics

Twitter LinkedIn 

Additional Resources Coming Soon
  • The Making of a Meet-Up
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Announcing GraalVM: Run Programs Faster Anywhere

Tue, 2018-04-17 02:47

Current production virtual machines (VMs) provide high performance execution of programs only for a specific language or a very small set of languages. Compilation, memory management, and tooling are maintained separately for different languages, violating the ‘don’t repeat yourself’ (DRY) principle. This leads not only to a larger burden for the VM implementers, but also for developers, due to inconsistent performance characteristics, tooling, and configuration. Furthermore, communication between programs written in different languages requires costly serialization and deserialization logic. Finally, high performance VMs are heavyweight processes with a high memory footprint and are difficult to embed.

Several years ago, to address these shortcomings, Oracle Labs started a new research project for exploring a novel architecture for virtual machines. Our vision was to create a single VM that would provide high performance for all programming languages, therefore facilitating communication between programs. This architecture would support unified language-agnostic tooling for better maintainability and its embeddability would make the VM ubiquitous across the stack.

To meet this goal, we have invented a new approach for building such a VM. After years of extensive research and development, we are now ready to present the first production-ready release.

Introducing GraalVM

Today, we are pleased to announce the 1.0 release of GraalVM, a universal virtual machine designed for a polyglot world.

GraalVM provides high performance for individual languages and interoperability with zero performance overhead for creating polyglot applications. Instead of converting data structures at language boundaries, GraalVM allows objects and arrays to be used directly by foreign languages.

Example scenarios include accessing functionality of a Java library from Node.js code, calling a Python statistical routine from Java, or using R to create a complex SVG plot from data managed by another language. With GraalVM, programmers are free to use whatever language they think is most productive to solve the current task.

GraalVM 1.0 allows you to run:

- JVM-based languages like Java, Scala, Groovy, or Kotlin
- JavaScript (including Node.js)
- LLVM bitcode (created from programs written in e.g. C, C++, or Rust)
- Experimental versions of Ruby, R, and Python

GraalVM can either run standalone, embedded as part of platforms like OpenJDK or Node.js, or even embedded inside databases such as MySQL or the Oracle RDBMS. Applications can be deployed flexibly across the stack via the standardized GraalVM execution environments. In the case of data processing engines, GraalVM directly exposes the data stored in custom formats to the running program without any conversion overhead.

For JVM-based languages, GraalVM offers a mechanism to create precompiled native images with instant start up and low memory footprint. The image generation process runs a static analysis to find any code reachable from the main Java method and then performs a full ahead-of-time (AOT) compilation. The resulting native binary contains the whole program in machine code form for immediate execution. It can be linked with other native programs and can optionally include the GraalVM compiler for complementary just-in-time (JIT) compilation support to run any GraalVM-based language with high performance.

A major advantage of the GraalVM ecosystem is language-agnostic tooling that is applicable in all GraalVM deployments. The core GraalVM installation provides a language-agnostic debugger, profiler, and heap viewer. We invite third-party tool developers and language developers to enrich the GraalVM ecosystem using the instrumentation API or the language-implementation API. We envision GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.

GraalVM in Production

Twitter is one of the companies already deploying GraalVM in production today for executing their Scala-based microservices. The aggressive optimizations of the GraalVM compiler reduce object allocations and improve overall execution speed. This results in fewer garbage collection pauses and less computing power necessary for running the platform. See this presentation from a Twitter JVM Engineer describing their experiences in detail and how they are using the GraalVM compiler to save money. In the current 1.0 release, we recommend JVM-based languages and JavaScript (including Node.js) for production use, while R, Ruby, Python and LLVM-based languages are still experimental.

Getting Started

The binary of the GraalVM v1.0 (release candidate) Community Edition (CE) built from the GraalVM open source repository on GitHub is available here.

We are looking for feedback from the community for this release candidate. We welcome feedback in the form of GitHub issues or GitHub pull requests.

In addition to the GraalVM CE, we also provide the GraalVM v1.0 (release candidate) Enterprise Edition (EE) for better security, scalability and performance in production environments. GraalVM EE is available on Oracle Cloud Infrastructure and can be downloaded from the Oracle Technology Network for evaluation. For production use of GraalVM EE, please contact graalvm-enterprise_grp_ww@oracle.com.

Stay Connected

The latest up-to-date downloads and documentation can be found at www.graalvm.org. Follow our daily development, request enhancements, or report issues via our GitHub repository at www.github.com/oracle/graal. We encourage you to subscribe to these GraalVM mailing lists:

- graalvm-announce@oss.oracle.com
- graalvm-users@oss.oracle.com
- graalvm-dev@oss.oracle.com

We communicate via the @graalvm alias on Twitter and watch for any tweet or Stack Overflow question with the #GraalVM hash tag.

Future

This first release is only the beginning. We are working on improving all aspects of GraalVM; in particular the support for Python, R and Ruby.

GraalVM is an open ecosystem and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling. Please find more at www.graalvm.org on how to:

- allow your own language to run on GraalVM
- build language-agnostic tools for GraalVM
- embed GraalVM in your own application

We look forward to building this next generation technology for a polyglot world together with you!

Three Quick Tips API Platform CS - Gateway Installation (Part 3)

Thu, 2018-04-12 16:09

Part 2 of the series can be accessed here. Today we keep it short and simple; here are three troubleshooting tips for the Oracle API CS Gateway installation:

  • If, while running the "install" action, you see output something like:

           -bash-4.2$ ./APIGateway -f gateway-props.json -a install-configure-start-join
Please enter user name for weblogic domain,representing the gateway node:
weblogic
Password:
2018-03-22 17:33:20,342 INFO action: install-configure-start-join
2018-03-22 17:33:20,342 INFO Initiating validation checks for action: install.
2018-03-22 17:33:20,343 WARNING Previous gateway installation found at directory = /u01/oemm
2018-03-22 17:33:20,343 INFO Current cleanup action is CLEAN
2018-03-22 17:33:20,343 INFO Validation complete
2018-03-22 17:33:20,343 INFO Action install is starting
2018-03-22 17:33:20,343 INFO start action: install
2018-03-22 17:33:20,343 INFO Clean started.
2018-03-22 17:33:20,345 INFO Logging to file /u01/oemm/logs/main.log
2018-03-22 17:33:20,345 INFO Outcomes of operations will be accumulated in /u01/oemm/logs/status.log
2018-03-22 17:33:20,345 INFO Clean finished.
2018-03-22 17:33:20,345 INFO Installing Gateway
2018-03-22 17:33:20,718 INFO complete action: install isSuccess: failed detail: {}
2018-03-22 17:33:20,718 ERROR Action install has failed. Detail: {}
2018-03-22 17:33:20,718 WARNING Full-Setup execution incomplete. Please check log file for more details
2018-03-22 17:33:20,719 INFO Execution complete.

  The issue could be "/tmp" directory permissions. Please check that the tmp directory used by default by the OUI installer is not set up with "noexec", "nosuid", or "nodev". Please check for other permission issues as well. Another possible area to investigate is the size allocated to the "/tmp" file system (it should be greater than or equal to 10 GB).

  • If at some point while running any of the installer actions you get an "Invalid JSON object: .... " error, then please check whether the gateway-master.props file is empty. This can happen if, for example, you execute "ctrl+z" to exit an installer action. The best approach is to back up the gateway-master.json file and restore it in case the above error happens. In the worst case copy the gateway-master .
  •  If the "start" action is unable to start the managed server but the admin server starts OK, then try changing the "publishAddress" property's value to the "listenIpAddress" property's value and try install, configure, and start again. In other words, "publishAddress" = "listenIpAddress".

That is all for now; we will be back soon with more.


Introducing Build Pipeline in Oracle Developer Cloud

Wed, 2018-04-11 16:05

With our current release we are introducing a new build engine in Oracle Developer Cloud, called Mako. The new build engine comes with enhanced functionality and a new user interface in the Oracle Developer Cloud Service ‘Build’ tab for defining build pipelines visually. This is much-awaited functionality in Oracle Developer Cloud from the Continuous Integration and Continuous Delivery perspective.

So what is changing in Developer Cloud build?

The screenshot below shows the user interface for the new ‘Build’ tab in Oracle Developer Cloud. A quick glance tells you that a new tab called ‘Pipelines’ has been added alongside the ‘Jobs’ tab. The concept of creating build jobs remains the same; we now have pipelines in addition to the build jobs that you can create.

Creating a build job has changed as well. When you create a build job by clicking the ‘+New Job’ button in the Build tab, you will see a dialog box for the new build job. The first screenshot shows the earlier ‘New Job’ dialog, where you could give the job name and select whether to create a freestyle job or copy an existing build job.

The second screenshot shows the latest ‘New Job’ dialog in Oracle Developer Cloud. It has a job name, a description (which you could previously only give in the build configuration interface), create new/copy existing job options, a checkbox to select ‘Use for Merge Request’, and the most noticeable addition, the Software Template dropdown.

Dialog in the old build system:

Dialog in the new build system:

What do these additional fields in the ‘New Job’ dialog mean?

Description: The job description, which you could previously give in the build configuration interface. You will still be able to edit it in the build configuration as part of the Settings tab.

Use for Merge Request: By selecting this option, your build will be parameterized to get the Git repo URL, Git repo branch and Git repo merge id and perform the merge as part of the build.

Software Template: With this release you will be using your own Oracle Compute Classic instances to run/execute your build jobs. Earlier, build jobs were executed on an internal pool of compute. This gives you immense flexibility to configure your build machine with the software runtimes that you need, using the user interface that we provide as part of Developer Cloud Service. These configurations persist, and the build machines will not be claimed back, as they are your own compute instances. This also enables you to run multiple parallel builds without any constraint by spinning up new compute instances as per your requirements. You will be able to create multiple VM templates with different software configurations and choose among them while creating build jobs, as per your requirement.

Please use this link to refer to the documentation for configuring Software Templates.

Build Configuration Screen:

In the build configuration tab you will now have two tabs as seen in the screen shot below.

  1. Build Configuration
  2. Build Settings

As seen in the screenshot below, the Build Configuration tab further has Source Control, Build Parameters, Build Environment, Builders, and Post Build sub-tabs.

In the Build Settings tab, you will have sub-tabs such as General, Software, Triggers, and Advanced. Below is a brief description of each tab:

General: As seen in the screenshot below, this tab is for generic, build-job-related details. It is similar to the Main tab which existed previously.

Software: This tab is a new introduction in the build configuration to support the Software Templates for build machines described above. It lets you see or change the software template that you selected while creating the build job, and also lets you see the software (runtimes) available in the template. Please see the screenshot below for reference.

Triggers: You will be able to add build triggers such as a Periodic Trigger or an SCM Polling Trigger, as shown in the screenshot below. This is similar to the Triggers tab that existed earlier.

Advanced: Consists of some build settings related to aborting job conditions, retry count and adding timestamp to the console output.

In the Build Configuration Tab

The sub-tabs of the Build Configuration tab are described below:

Source Control: You can add Git as the Source Control from the dropdown-‘Add Source Control’.

 

Build Parameters: Apart from the existing build parameters such as String Parameter, Password Parameter, Boolean Parameter, and Choice Parameter, there is a new parameter type called Merge Request Parameters. The Merge Request Parameters get added automatically when the checkbox ‘Use for Merge Request’ is selected while creating the build job. This adds the Git repo URL, Git repo branch, and Git repo merge id as build parameters.

Build Environment: A new build environment setting, SonarQube Settings, has been added alongside the existing Xvfb Wrapper, Copy Artifacts, and Oracle Maven Repository Connection settings.

SonarQube Settings – For static code analysis using SonarQube tool. I will be publishing a separate blog on SonarQube in Developer Cloud.

Builders: To add build steps. There is an addition to the build steps: Docker Builder.

Docker Builder: Support to build Docker images and execute any Docker command. (A separate blog on Docker will follow.)

Post Build: To add Post Build configurations like deployment. SonarQube Result Publisher is the new Post Build configuration added in the current release.

Pipelines

After creating and configuring the build jobs, you can create a pipeline in the Pipelines tab using these build jobs. You can create a new pipeline using the ‘+New Pipeline’ button.

You will see the below dialog to create a new pipeline.

On creation of the Pipeline, you can drag and drop the build jobs using the Pipeline visual editor, sequence and connect the build jobs as per the requirement.

You can also add conditions to the connections to control execution, by double-clicking the links and selecting the condition from the dropdown, as shown in the screenshot below.

Once completed, the pipeline will be listed in the Pipelines tab as shown below.

 

You can start the build manually using the play button. You can also configure it to auto start when one of the jobs is executed externally.

Stay tuned for more blogs on latest features and capabilities of Developer Cloud Service. 

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Building Docker on Oracle Developer Cloud Service

Wed, 2018-04-11 13:40

The much awaited Docker build support on Oracle Developer Cloud Service is here. Now you will be able to build Docker images and execute Docker commands as part of the Continuous Integration and Continuous Deployment pipeline.

This blog describes what you can do with the Docker build support on Developer Cloud Service and how. It gives an overview of the Docker commands that you can run/execute on Developer Cloud as part of a build job.

Note: There will be a series of blogs following up on using Docker build on Developer Cloud covering different technology stacks and usage.

Build Job Essentials:

A prerequisite for running Docker commands or using the Docker Builder steps in a build job is selecting a software template that has Docker included as a software bundle. Selecting such a template ensures that the Build VM instantiated from it has the Docker runtime installed, as shown in the screenshot below. The template names may vary in your instance.

To learn about the new build system you can read this blog post. You can also refer to the documentation on configuring the Build VM.

 

You can verify whether Docker is part of the selected VM by navigating to Build -> <Build Job> -> Build Settings -> Software.

You can refer to this link to understand more about the new build interface on Developer Cloud.

 

Once you have created the build job with the right software template selected as described above, go to the Builders tab in the build job and click Add Builder. You will see Docker Builder in the dropdown, as shown in the screenshot below. Selecting Docker Builder gives you the Docker command options that are provided out of the box.

You can run all other Docker commands as well, by selecting Unix Shell Builder and writing your Docker command in it.

In the below screen shot you can see two commands selected from the Docker Builder menu.

Docker Version – This command interface prints the Docker version installed on your Build VM.

Docker Login – Using this command interface you can log in and create a connection with the Docker registry. By default this is DockerHub, but you can use Quay.io or any other Docker registry available over the internet. If you leave the Registry Host empty, it will connect to DockerHub by default.

 

Docker build – Using this command interface you can build a Docker image in Oracle Developer Cloud. You need to have a Dockerfile in the Git repository that you configure in the build job. The path of the Dockerfile has to be specified in the Dockerfile field; if the Dockerfile resides in the build context root, you can leave the field empty. You will also have to give the image a name.

 

Docker Push – Use this command interface to push the Docker image that you built with the Docker Build command interface to the Docker registry. You will first have to use Docker Login to create a connection to the Docker registry where you want to push the image, then use Docker Push, giving the exact name of the image as specified in the Docker Build command.

 

Docker rmi – To remove the Docker images we have built.

As mentioned previously, you can run any Docker command in Developer Cloud.  If the UI for the command is not given, you can use Unix Shell Builder to write and execute your Docker command.

In my follow-up blog series I will be using a combination of the out-of-the-box command interfaces and the Unix Shell Builder to execute Docker commands and get build tasks accomplished. So watch out for the upcoming blogs here.

Happy Dockering on Developer Cloud!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Build and Deploy .Net Code using Oracle Developer Cloud

Wed, 2018-04-11 13:27

The much awaited support for building and deploying .Net code on Oracle Cloud using Developer Cloud Service is here.

This blog post will show how you can use Oracle Developer Cloud Service to build .Net code and deploy it on Oracle Application Container Cloud. It will show how the newly released Docker build support in Developer Cloud can be leveraged to perform the build.

Technology Stack Used:

Application Stack: .Net for developing ASPX pages

Build: Docker for compiling the .Net code

Packaging Tool: Grunt to package the compiled code

DevOps Cloud Service: Oracle Developer Cloud

Deployment Cloud Service: Oracle Application Container Cloud

OS for Development: Windows 7

 

.Net application code source:

 The ASP.Net application that we would be building and deploying on Oracle Cloud using Docker can be downloaded from the Git repository on GitHub. Below is the link for the same.

https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp

If you want to clone the GitHub repository then use the below git command after installing the git cli on your machine.

git clone https://github.com/dotnet/dotnet-docker-samples/

After cloning the above-mentioned repository you can choose to use the aspnetapp. Below is the folder structure of the cloned aspnetapp.

 

Apart from the four highlighted files in the screenshot below, which are essential for the deployment, all the other files and folders are part of the .Net application.

Note: You may not be able to see the .git folder if you have not initialized the Git repository.

Now we need to initialize the Git repository for the aspnetappl folder, as we will be pushing this code to the Git repo hosted on Oracle Developer Cloud. Below are the commands that you can use on your command line after installing the Git CLI and adding it to your path.

Command prompt > cd <to the aspnetappl folder>

Command prompt > git init

Command prompt > git add --all

Command prompt > git commit -m "First Commit"

The git commands above initialize the Git repository locally in the application folder, and then add all the code in the folder to the local Git repository using the git add --all command.

Then commit the added files by using the git commit command, as shown above.

Now go to Oracle Developer Cloud project and create a Git repository for the .Net code to be pushed to. For the purpose of this blog I have created the Git repository by clicking the ‘New Repository’ button and named it as ‘DotNetDockerAppl’, as shown in the screenshot below. You may choose to name it as per your choice.

Copy the Git repository URL as shown below.

Then add the URL as the remote repository to the local Git repository that we have created using the below command:

 Command prompt > git remote add origin <Developer Cloud Git repository URL>

Then use the below command to push the code to the master branch of the Developer Cloud hosted Git repository.

Command prompt > git push origin master

 

Deployment-related files that need to be created:

Dockerfile

This file will be used by the Docker tool to build a Docker image with .NET Core installed; the image also includes the .Net application code cloned from the Developer Cloud Git repository. You get a Dockerfile as part of the project; please replace the existing Dockerfile script with the one below.

 

FROM microsoft/aspnetcore-build:2.0
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -r linux-x64

In the above script we start from the aspnetcore-build:2.0 image, create a work directory into which we copy the .csproj file, and then copy all the code from the Git repo. Finally, we use the ‘dotnet’ command to publish the compiled code, targeting a linux-x64 machine.

manifest.json

This file is essential for the deployment of the .Net application on the Oracle Application Container Cloud.

{
  "runtime": {
    "majorVersion": "2.0.0-runtime"
  },
  "command": "dotnet AspNetAppl.dll"
}

The command attribute in the JSON specifies the dll to be executed by the dotnet command, and the runtime attribute specifies the .Net version to be used for executing the compiled code.

 

Gruntfile.js

This file defines the build task and is used by the build to identify the deployment artifact type that needs to be generated, which in this case is a zip file, as well as the files from the project that need to be included in the build artifact. For the .Net application we only need to include everything in the publish folder, including the manifest.json for Application Container Cloud deployment. The publish folder is set via the cwd attribute, as shown in the code snippet below.

 

/**
 * http://usejsdoc.org/
 */
module.exports = function(grunt) {
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    compress: {
      main: {
        options: {
          archive: 'AspNetAppl.zip',
          pretty: true
        },
        expand: true,
        cwd: './publish',
        src: ['./**/*'],
        dest: './'
      }
    }
  });
  grunt.registerTask('default', ['compress']);
};

package.json

Since Grunt is a Node.js-based build tool, which we are using in this blog to build and package the deployment artifact, we need the package.json file to define the dependencies required for Grunt to execute.

{
  "name": "ASPDotNetAppl",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "dotnet AspNetAppl.dll"
  },
  "dependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-compress": "^1.3.0",
    "grunt-hook": "^0.3.1",
    "load-grunt-tasks": "^3.5.2"
  }
}

Once all the code is pushed to the Git repository hosted on Oracle Developer Cloud, you can browse and verify it by going to the Code tab and selecting the appropriate Git repository and branch in the respective dropdowns at the top of the file list, as shown in the screenshot below.

 

Build Job Configuration on Developer Cloud

We are going to use the newly introduced Mako build instead of the Hudson build system in DevCS.

Below are the build job configuration screen shots for the ‘DotNetBuild’ which will build and deploy the .Net application:

Create a build job by clicking the “New Job” button and give it a name of your choice; for this blog I have named it ‘DotNetBuild’. You will also need to select the software template which contains the Docker and Node.js runtimes. In case you do not see the required software template in the dropdown, as shown in the screenshot below, you will have to configure it from the Organization -> VM Template menu. This will kick-start the Build VM with the required software template. To understand and learn more about configuring VMs and VM Templates you can refer to this link.

 

Now go to the Builders tab where we configure the build steps. First we select an Execute Shell builder in which we build the Docker image using the Dockerfile in our Git repository, create a container from that image (without starting the container), copy the compiled code from the container to the build machine, and then use the npm registry to download the Grunt build tool dependencies. Finally, we use the grunt command to build the AspNetAppl.zip file which will be deployed on Application Container Cloud.

 

 

Now configure the PSM CLI with the credentials for your ACCS instance along with the domain name. Then configure another Unix Shell builder in which you provide the psm command to deploy, on Application Container Cloud, the zip file that we generated earlier using the Grunt build tool.

Note: All this will be done in the same ‘DotNetBuild’ build job that we have created earlier.

 

As the last part of the build configuration, in the Post Build tab configure the Artifact Archiver as shown below, to archive the generated zip file for deployment.

 

The screenshot below shows the ‘DotNet’ application deployed on the Application Container Cloud service console. Copy the application URL as shown in the screenshot. The URL will vary for your cloud instance.

 

Use the copied URL to access the deployed .Net application in a browser. It will look as shown in the screenshot below.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Introducing the Oracle MySQL Operator for Kubernetes

Wed, 2018-03-28 00:30

(Originally published on Medium)

Introduction

Oracle recently open sourced a Kubernetes operator for MySQL that makes running and managing MySQL on Kubernetes easier.

The MySQL Operator is a Kubernetes controller that can be installed into any existing Kubernetes cluster. Once installed, it will enable users to create and manage production-ready MySQL clusters using a simple declarative configuration format. Common operational tasks such as backing up databases and restoring from an existing backup are made extremely easy. In short, the MySQL Operator abstracts away the hard work of running MySQL inside Kubernetes.

The project started as a way to help internal teams get MySQL running in Kubernetes more easily, but it quickly became clear that many other people might be facing similar issues.


Features

Before we dive into the specifics of how the MySQL Operator works, let’s take a quick look at some of the features it offers:

Cluster configuration

We have only two options for how a cluster is configured.

  • Primary (in this mode the group has a single primary server that is set to read-write mode; all the other members in the group are set to read-only mode)
  • Multi-Primary (in multi-primary mode there is no notion of a single primary; there is no need for an election procedure since no server plays a special role)
Cluster management
  • Create and scale MySQL clusters using Innodb and Group Replication on Kubernetes
  • When cluster instances die, the MySQL Operator will automatically re-join them into the cluster
  • Use Kubernetes Persistent Volume Claims to store data on local disk or network attached storage.
Backup and restore
  • Create on-demand backups
  • Create backup schedules to automatically backup databases to Object Storage (S3 etc)
  • Restore a database from an existing backup
Operations
  • Run on any Kubernetes cluster (Oracle Cloud Infrastructure, AWS, GCP, Azure)
  • Prometheus metrics for alerting and monitoring
  • Self healing clusters
The Operator Pattern

A Kubernetes Operator is simply a domain specific controller that can manage, configure and automate the lifecycle of stateful applications. Managing stateful applications, such as databases, caches and monitoring systems running on Kubernetes is notoriously difficult. By leveraging the power of Kubernetes API we can now build self managing, self driving infrastructure by encoding operational knowledge and best practices directly into code. For instance, if a MySQL instance dies, we can use an Operator to react and take the appropriate action to bring the system back online.


How it works

The MySQL Operator makes use of Custom Resource Definitions as a way to extend the Kubernetes API. For instance, we create custom resources for MySQLClusters and MySQLBackups. Users of the MySQL Operator interact via these third party resource objects. When a user creates a backup for example, a new MySQLBackup resource is created inside Kubernetes which contains references and information about that backup.

The MySQL Operator is, at its core, a simple Kubernetes controller that watches the API server for Custom Resource Definitions relating to MySQL and acts on them.


HA / Production Ready MySQL Clusters

The MySQL Operator is opinionated about the way in which clusters are configured. We build upon InnoDB cluster (which uses Group Replication) to provide a complete high availability solution for MySQL running on Kubernetes.


Examples

The following examples will give you an idea of how the MySQL Operator can be used to manage your MySQL Clusters.


Create a MySQL Cluster

Creating a MySQL cluster using the Operator is easy. We define a simple YAML file and submit this directly to Kubernetes via kubectl. The MySQL operator watches for MySQLCluster resources and will take action by starting up a MySQL cluster.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLCluster
metadata:
  name: mysql-cluster-with-3-replicas
spec:
  replicas: 3

You should now be able to see your cluster running

There are several other options available when creating a cluster such as specifying a Persistent Volume Claim to define where your data is stored. See the examples directory in the project for more examples.


Create an on-demand backup

We can use the MySQL operator to create an “on-demand” database backup and upload it to object storage.

Create a backup definition and submit it via kubectl.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackup
metadata:
  name: mysql-backup
spec:
  executor:
    provider: mysqldump
    databases:
      - test
  storage:
    provider: s3
    secretRef:
      name: s3-credentials
    config:
      endpoint: x.compat.objectstorage.y.oraclecloud.com
      region: ociregion
      bucket: mybucket
  clusterRef:
    name: mysql-cluster

You can now list or fetch individual backups via kubectl

kubectl get mysqlbackups

Or fetch an individual backup

kubectl get mysqlbackup api-production-snapshot-151220170858 -o yaml
Create a Backup Schedule

Users can attach scheduled backup policies to a cluster so that backups get created on a given cron schedule. A user may create multiple backup schedules attached to a single cluster if required.

This example will create a backup of a cluster test database every hour and upload it to Oracle Cloud Infrastructure Object Storage.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackupSchedule
metadata:
  name: mysql-backup-schedule
spec:
  schedule: '30 * * * *'
  backupTemplate:
    executor:
      provider: mysqldump
      databases:
        - test
    storage:
      provider: s3
      secretRef:
        name: s3-credentials
      config:
        endpoint: x.compat.objectstorage.y.oraclecloud.com
        region: ociregion
        bucket: mybucket
    clusterRef:
      name: mysql-cluster

Roadmap

Some of the features on our roadmap include

  • Support for MySQL Enterprise Edition
  • Support for MySQL Enterprise Backup
Conclusion

The MySQL Operator showcases the power of Kubernetes as a platform. It makes running MySQL inside Kubernetes easy by abstracting complexity and reducing operational burden. Although it is still in very early development, the MySQL Operator already provides a great deal of useful functionality out of the box.

Visit https://github.com/oracle/mysql-operator to learn more. We welcome contributions, ideas and feedback from the community.

If you want to deploy MySQL inside Kubernetes, we recommend using the MySQL Operator to do the heavy lifting for you.


Announcing Terraform support for Oracle Cloud Platform Services

Mon, 2018-03-26 06:03

Oracle and HashiCorp are pleased to announce the immediate availability of the Oracle Cloud Platform Terraform provider.

The initial release of the Oracle Cloud Platform Terraform provider supports the creation and lifecycle management of Oracle Database Cloud Service and Oracle Java Cloud Service instances.

With the availability of the Oracle Cloud Platform services support, Terraform’s “infrastructure-as-code” configurations can now be defined for deploying standalone Oracle PaaS services, or combined with the Oracle Cloud Infrastructure and Infrastructure Classic services supported by the opc and oci providers for complete infrastructure and application deployment.

Supported PaaS Services

The following Oracle Cloud Platform services are supported by the initial Oracle Cloud Platform (PaaS) Terraform provider. Additional services/resources will be added over time.

  • Oracle Database Cloud Service Instances
  • Oracle Database Cloud Service Access Rules
  • Oracle Java Cloud Service Instances
  • Oracle Java Service Access Rules
Using the Oracle Cloud Platform Terraform provider

To get started using Terraform to provision Oracle Cloud Platform services, let's look at an example of deploying a single Java Cloud Service instance, along with its dependent Database Cloud Service instance.

First we declare the provider definition, providing the account credentials and the appropriate service REST API endpoints. The Identity Domain name, Identity Service ID, and REST endpoint URL can be found in the Service details section on the My Services Dashboard.

For IDCS Cloud Accounts use the Identity Service ID for the identity_domain.

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "idcs-5bb188b5460045f3943c57b783db7ffa"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

For Traditional Accounts use the account Identity Domain Name for the identity_domain

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "mydomain"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

Database Service Instance configuration

The oraclepaas_database_service_instance resource is used to define the Oracle Database Cloud service instance. A single terraform database service resource definition can represent configurations ranging from a single instance Oracle Database Standard Edition deployment, to a complete multi node Oracle Database Enterprise Edition with RAC and Data Guard for high availability and disaster recovery.

Instances can also be created for backups or snapshots of another Database Service instance. For this example we’ll create a new single instance database for use with the Java Cloud Service configured later further down.

resource "oraclepaas_database_service_instance" "database" {
  name              = "my-terraformed-database"
  description       = "Created by Terraform"
  edition           = "EE"
  version           = "12.2.0.1"
  subscription_type = "HOURLY"
  shape             = "oc1m"
  ssh_public_key    = "${file("~/.ssh/id_rsa.pub")}"

  database_configuration {
    admin_password     = "Pa55_Word"
    backup_destination = "BOTH"
    sid                = "ORCL"
    usable_storage     = 25
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
    cloud_storage_username  = "${var.user}"
    cloud_storage_password  = "${var.password}"
    create_if_missing       = true
  }
}

Let's take a closer look at the configuration settings. Here we are declaring that this is an Oracle Database 12c Release 2 (12.2.0.1) Enterprise Edition instance with an oc1m (1 OCPU/15 GB RAM) shape and hourly usage metering.

edition           = "EE"
version           = "12.2.0.1"
subscription_type = "HOURLY"
shape             = "oc1m"

The ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The database_configuration block sets the initial configuration for the actual Database instance to be created in the Database Cloud service, including the database SID, the initial password, and the initial usable block volume storage for the database.

database_configuration {
  admin_password     = "Pa55_Word"
  backup_destination = "BOTH"
  sid                = "ORCL"
  usable_storage     = 25
}

The backup_destination configures whether backups go to the Object Storage Service (OSS), to both object storage and local storage (BOTH), or are disabled (NONE). A backup destination of OSS or BOTH is required for database instances that are used in combination with Java Cloud Service instances.

The Object Storage Service location and access credentials are configured in the backups block

backups {
  cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
  cloud_storage_username  = "${var.user}"
  cloud_storage_password  = "${var.password}"
  create_if_missing       = true
}

Java Cloud Service Instance

The oraclepaas_java_service_instance resource is used to define the Oracle Java Cloud service instance. A single Terraform resource definition can represent configurations ranging from a single instance Oracle WebLogic Server deployment, to a complete multi-node Oracle WebLogic cluster with a Oracle Coherence data grid cluster and an Oracle Traffic Director load balancer.

Instances can also be created from snapshots of another Java Cloud Service instance. For this example we’ll create a new two node Weblogic cluster with a load balancer, and associated to the Database Cloud Service instance defined above.

resource "oraclepaas_java_service_instance" "jcs" {
  name                 = "my-terraformed-java-service"
  description          = "Created by Terraform"
  edition              = "EE"
  service_version      = "12cRelease212"
  metering_frequency   = "HOURLY"
  enable_admin_console = true
  ssh_public_key       = "${file("~/.ssh/id_rsa.pub")}"

  weblogic_server {
    shape = "oc1m"

    managed_servers {
      server_count = 2
    }

    admin {
      username = "weblogic"
      password = "Weblogic_1"
    }

    database {
      name     = "${oraclepaas_database_service_instance.database.name}"
      username = "sys"
      password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
    }
  }

  oracle_traffic_director {
    shape = "oc1m"

    listener {
      port         = 8080
      secured_port = 8081
    }
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
    auto_generate           = true
  }
}

Let's break this down. Here we are declaring that this is a 12c Release 2 (12.2.1.2) Enterprise Edition Java Cloud Service instance with hourly usage metering.

edition            = "EE"
service_version    = "12cRelease212"
metering_frequency = "HOURLY"

Again the ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The weblogic_server block provides the configuration details for the WebLogic Server instances deployed for this Java Cloud Service instance. The weblogic_server definition sets the instance shape, in this case an oc1m (1 OCPU/15 GB RAM).

The admin block sets the WebLogic server admin user and initial password.

admin {
  username = "weblogic"
  password = "Weblogic_1"
}

The database block connects the WebLogic server to the Database Service instance already defined above. In this example we are assuming the database and java service instances are declared in the same configuration, so we can fetch the database configuration values.

database {
  name     = "${oraclepaas_database_service_instance.database.name}"
  username = "sys"
  password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
}

The oracle_traffic_director block configures the load balancer that directs traffic to the managed WebLogic server instances.

oracle_traffic_director {
  shape = "oc1m"

  listener {
    port         = 8080
    secured_port = 8081
  }
}

By default the load balancer will be configured with the same admin credentials defined in the weblogic_server block; different credentials can also be configured if required. If the insecure port is not set, then only the secured_port is enabled.

Finally, similar to the Database Cloud Service instance configuration, the backups block sets the Object Storage Service location for the Java Cloud Service instance backups.

backups {
  cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
  auto_generate           = true
}

Provisioning

With the provider and resource definitions configured in a Terraform project (e.g. all in a main.tf file), deploying the above configuration is as simple as:

$ terraform init
$ terraform apply

The terraform init command will automatically fetch the latest version of the oraclepaas provider, and terraform apply will start the provisioning. The complete provisioning of the Database and Java Cloud Service instances can be a long-running operation. To remove the provisioned instances, run terraform destroy.
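Note that the configuration references a few input variables (var.domain, var.user, var.password). A minimal sketch of how these might be declared (the descriptions are illustrative; values would typically be supplied via terraform.tfvars or environment variables):

variable "domain" {
  description = "Identity domain or Cloud account name"
}

variable "user" {
  description = "Oracle Cloud username"
}

variable "password" {
  description = "Oracle Cloud password"
}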

Related Content

Terraform Provider for Oracle Cloud Platform

Terraform Provider for Oracle Cloud Infrastructure

Terraform Provider for Oracle Cloud Infrastructure Classic

Part II: Data processing pipelines with Spring Cloud Data Flow on Oracle Cloud

Thu, 2018-03-22 00:30

This is the second (and final) part of this blog series about Spring Cloud Data Flow on Oracle Cloud.

In Part 1, we covered some of the basics and the infrastructure setup (Kafka, MySQL), and at the end of it we had a fully functional Spring Cloud Data Flow server on the cloud — now it’s time to put it to use!

In this part, you will

  • get a technical overview of the solution and look at some internal details — the whys and hows
  • build and deploy a data flow pipeline on Oracle Application Container Cloud
  • and finally test it out…
Behind the scenes

Before we see things in action, here is an overview so that you understand what you will be doing and get a (rough) idea of why it works the way it does.

At a high level, this is how things work in Spring Cloud Data Flow (you can always dive into the documentation for details)

  • You start by registering applications — these contain the core business logic and deal with how you would process the data e.g. a service which simply transforms the data it receives (from the messaging layer) or an app which pumps user events/activities into a message queue
  • You will then create a stream definition where you will define the pipeline of your data flow (using the apps which you previously registered) and then deploy them
  • (here is the best part!) once you deploy the stream definition, the individual apps in the pipeline get automatically deployed to Oracle Application Container Cloud, thanks to our custom Spring Cloud Deployer SPI implementation (this was briefly mentioned in Part 1)

At a high level, the SPI implementation needs to adhere to the contract/interface outlined by

org.springframework.cloud.deployer.spi.app.AppDeployer and provide implementations for the following methods: deploy, undeploy, status, and environmentInfo

Thus the implementation handles the lifecycle of the pipeline/stream-processing applications:

  • creation and deletion
  • providing status information
Show time…!
App registration
We will start by registering our stream/data processing applications

As mentioned in Part 1, Spring Cloud Data Flow uses Maven as one of its sources for the applications which need to be deployed as a part of the pipelines which you build — more details here and here

You can use any Maven repo — we are using the Spring Maven repo since we will be importing their pre-built starter apps. Here is the manifest.json where this is configured

{   "runtime": {     "majorVersion": "8"   },   "command": "java -jar spring-cloud-dataflow-server-accs-1.0.0-SNAPSHOT.jar    --server.port=$PORT    --maven.remote-repositories.repo1.url=http://repo.spring.io/libs-snapshot    --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=$OEHPCS_EXTERNAL_CONNECT_STRING     --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=<event_hub_zookeeper_IP>:<port>",   "notes": "ACCS Spring Cloud Data Flow Server" }

manifest.json for Data Flow server on ACCS

Access the Spring Cloud Data Flow dashboard — navigate to the application URL e.g. https://SpringCloudDataflowServer-mydomain.apaas.us2.oraclecloud.com/dashboard

Spring Cloud Data Flow dashboard

For the purpose of this blog, we will import two pre-built starter apps

http
  • Type — source
  • Role — pushes data to the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:http-source-kafka:1.0.0.BUILD-SNAPSHOT

log

  • Type — sink
  • Role — consumes data/events from the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:log-sink-kafka:1.0.0.BUILD-SNAPSHOT

There is another category of apps known as processor — this is not covered for the sake of simplicity

There are a bunch of these starter apps which make it super easy to get going with Spring Cloud Data Flow!

Importing applications

After app registration, we can go ahead and create our data pipeline. But, before we do that, let’s quickly glance at what it will do…

Overview of the sample pipeline/data flow

Here is the flow which the pipeline will encapsulate — you will see this in action once you reach the Test Drive section, so keep going!

  • http app -> Kafka topic
  • Kafka -> log app -> stdout

The http app will provide a REST endpoint for us to POST messages to it and these will be pushed to a Kafka topic. The log app will simply consume these messages from the Kafka topic and then spit them out to stdout — simple!

Create & deploy a pipeline

Let’s start creating the stream — you can pick from the list of source and sink apps which we just imported (http and log)

 

Use the below stream definition — just replace KafkaDemo with the name of the Event Hub Cloud service instance which you set up in the Infrastructure setup section in Part 1

http --port=$PORT --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]' | log --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]'

Stream definition

You will see a graphical representation of the pipeline (which is quite simple in our case)

Stream definition


Create (and deploy) the pipeline

Deploy the stream definition

The deployment process will be initiated, and this will be reflected on the console

Deployment in progress….

Go back to the Applications menu in Oracle Application Container Cloud to confirm that the deployment of the individual apps has also been triggered

Deployment in progress…

Open the application details and navigate to the Deployments section to confirm that both apps have a service binding to the Event Hub instance as specified in the stream definition

Service Binding to Event Hub Cloud

After the applications are deployed to Oracle Application Container Cloud, the state of the stream definition will change to deployed and the apps will also show up in the Runtime section

 

Deployment complete

 

Spring Cloud Data Flow Runtime menu

Connecting the dots..
Before we jump ahead and test out the data pipeline we just created, here are a couple of pictorial representations to summarize how everything connects logically

Individual pipeline components in Spring Cloud Data Flow map to their corresponding applications in Oracle Application Container Cloud — deployed via the custom SPI implementation (discussed above as well as in part 1)

Spring Cloud Data Flow pipeline to application mapping

.. and here is where the logical connection to Kafka is depicted

  • http app pushes to Kafka topic
  • the log app consumes from Kafka topic and emits the messages to stdout
  • the topics are auto-created in Kafka by default (you can change this) and the naming convention is the stream definition (DemoStream) and the pipeline app name (http) separated by a dot (.)

Pipeline apps interacting with Kafka

Test drive

Time to test the data pipeline…

Send messages via the http (source) app
POST a few messages to the REST endpoint exposed by the http app (check its URL from the Oracle Application Container Cloud console) — these messages will be sent to a Kafka topic and consumed by the log app

curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test1
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test12
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test123

Check the log (sink) service
Download the logs for the log app to confirm. Navigate to the application details and check out the Logs tab in the Administration section — documentation here

Check logs

You should see the same messages which you sent to the HTTP endpoint

 

Messages from Kafka consumed and sent to stdout

There is another way…
You can also validate this directly using Kafka (on Event Hub Cloud) itself — all you need is to create a custom Access Rule to open port 6667 on the Kafka server VM on Oracle Event Hub Cloud — details here

You can now inspect the Kafka topic directly by using the console consumer and then POSTing messages to the HTTP endpoint (as mentioned above)

kafka-console-consumer.bat --bootstrap-server <event_hub_kafka_IP>:6667 --topic DemoStream.http

Un-deploy
If you trigger an un-deployment or destroy of the stream definition, the corresponding applications will be deleted from Oracle Application Container Cloud

Un-deploy/destroy the definition

Quick recap

That’s all for this blog and it marks the end of this 2-part blog series!

  • we covered the basic concepts & deployed a Spring Cloud Data Flow server on Oracle Application Container Cloud along with its dependent components which included…
  • Oracle Event Hub Cloud as the Kafka based messaging layer, and Oracle MySQL Cloud as the persistent RDBMS store
  • we then explored some behind the scenes details and made use of our Spring Cloud Data Flow setup where …
  • … we built & deployed a simple data pipeline along with its basic testing/validation
Don’t forget to…
  • check out the tutorials for Oracle Application Container Cloud — there is something for every runtime!
  • other blogs on Application Container Cloud

Cheers!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

Wed, 2018-03-21 18:40

There is little in our lives that does not rely on software. That has been the reality for quite some time, and it will be even more true as self-driving cars and similar technologies become an even greater part of our lives. But as our reliance on software grows, so does the potential for disaster as software becomes increasingly complex.

In September 2017 The Atlantic featured “The Coming Software Apocalypse,” an article by James Somers that offers a fascinating and sobering look at how rampant code complexity has caused massive failures in critical software systems, like the 2014 incident that left the entire state of Washington without 9-1-1 emergency call-in services until the problem was traced to software running on a server in Colorado.

The article suggests that the core of the complexity problem is that code is too hard to think about. When and how did this happen?  

“You have to talk about the problem domain,” says Chris Newcombe, “because there are areas where code clearly works fine.” Newcombe, one of the people interviewed for the Atlantic article, is an expert on combating complexity, and since 2014 has been an architect on Oracle’s Bare Metal IaaS team.

“I used to work in video games,” Newcombe says. “There is lots of complex code in video games and most of them work fine. But if you're talking about control systems, with significant concurrency or affecting real-world equipment, like cars and planes and rockets or large-scale distribution systems, then we still have a way to go to solve the problem of true reliability. I think it's problem-domain specific. I don't think code is necessarily the problem. The problem is complexity, particularly concurrency and partial failure modes and side effects in the real world.”
 
Java Champion Adam Bien believes that in constrained environments, such as the software found in automobiles, “it's more or less a state machine which could or should be coded differently. So it really depends on the focus or the context. I would say that in enterprise software, code works well. The problem I see is more if you get newer ideas -- how to reshape the existing code quickly. But also coding is not just about code. Whether you write code or draw diagrams, the complexity will remain the same.”

Java Champion and microservices expert Chris Richardson agrees that “if you work hard enough, you can deliver software that actually works.” But he questions what is actually meant when software is described as “working well.”

“How successful are large software developments?” Richardson asks. “Do they meet requirements on time? Obviously that's a complex issue around project management and people. But what's the success rate?”

Richardson also points out that concerns about complexity are nothing new. “If you go back and look at the literature 30 or 40 years ago, people were concerned about software complexity then.”

The Atlantic article mentions that in most cases software does exactly what it was designed to do, an indication that it's not really a failure of the software as much as of the design of the software.

According to Developer Champion and Oracle ACE Director Lucas Jellema, “The complexity may not be in the software, but in the translation of the real-world problem or requirement into software. That starts not with coding, but with communication from one human being to another, from business end user to analyst to developer and maybe even some layers in between. That's where it usually goes wrong. In the end the software will do what the programmer told it to do, but that might not be what the business user or the real world requires it to do.”

Communication between stakeholders is only one aspect of the battle to reduce software complexity, and it’s just one issue among many that Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss in this podcast. So settle in and listen.

This program was recorded on November 22, 2017.

The Panelists

(In alphabetical order)

Adam Bien
Java Champion
Oracle ACE Director
Twitter
Lucas Jellema
CTO, AMIS Services
Oracle Developer Champion
Oracle ACE Director
Twitter  LinkedIn
  Chris Newcombe
Architect, Oracle Bare Metal IaaS Team
 LinkedIn 
  Chris Richardson
Founder, Eventuate, Inc.
Java Champion
Twitter LinkedIn

Additional Resources Coming Soon
  • AI Beyond Chatbots: How is AI being applied to modern applications?
  • Microservices, API Management, and Modern Enterprise Software Architecture
