Streaming Steam games from Amazon EC2 to a Steam Link over an OpenVPN tunnel, featuring pfSense and VMware

Oh, have I longed to write this blog post. Ever since I bought a Steam Link for myself as a Christmas gift I’ve been wanting to make use of it. I’m the kind of person who sometimes (a bit too often) buys stuff first and justifies the purchase later (sometimes with a bit too much infrastructure).

Anyways, this blog post was a starting point for me:
Revised and much faster, run your own high-end cloud gaming service on EC2!

Back in February I gave it a try but never got it to work; I wasn’t able to ping my local machines from my EC2 machine over my OpenVPN tunnel. This confused me a lot and I left it for a while. I tried again last week and got it to work. The magic was that since I’m running my pfSense instance in VMware, I had to put the network card in promiscuous mode (yes, it’s really called that, and it means the virtual switch will deliver frames to the VM even when they’re addressed to other MAC addresses, which the bridged setup needs).

After the network card was in promiscuous mode everything just worked. I downloaded a couple of games, and when I started a Steam client on my local network it simply said that I could start streaming from the Windows machine I had in EC2.

In the blog post above the connection is made from your local machine to EC2, but I’m doing it in the other direction, so I’m going to explain that in more detail here. Also, since the premade EC2 Gaming AMI is a couple of years old, I had to update Windows, Steam and the Nvidia drivers, but I’ll go through that too.

EC2

These are the steps needed to get the machine up and running in EC2; refer back to the original blog post for details.

  1. Launch the ec2gaming AMI in EC2 as a g2.2xlarge spot instance; this is documented in the blog post already. I created a Security Group with full access for my public IP address, but you can of course be more restrictive and only allow RDP.
  2. Connect to the PC (this works even on a Mac with Microsoft Remote Desktop)
  3. Change the password on first login (you don’t have a choice)
  4. Run Windows Update (this will download about 1 GB of updates as of July 2018)
  5. Download the NVIDIA GRID K520/K340 (Release 320) drivers from here and upgrade them
  6. Uninstall OpenVPN (from the Start menu) and download a newer version from here. Don’t install the OpenVPN Service; it’s not needed.
  7. Now is the time to take a snapshot of the machine, since a spot instance is always terminated when you turn it off. You can do this manually from the AWS Console or using the gaming-down.sh script as described in the blog. If you plan to keep using the scripts, it’s a good idea to create an IAM user with limited access, since the credentials are in clear text in the script.

I’ve created a pretty narrow policy for the IAM user that runs gaming-up.sh and gaming-down.sh:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:DeleteSnapshot",
            "Resource": "arn:aws:ec2:*::snapshot/*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeImages",
                "ec2:DescribeSpotPriceHistory",
                "ec2:CancelSpotInstanceRequests",
                "ec2:DeregisterImage",
                "ec2:DescribeInstances",
                "ec2:RequestSpotInstances",
                "ec2:CreateImage",
                "ec2:DescribeSpotInstanceRequests"
            ],
            "Resource": "*"
        }
    ]
}
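For reference, the core of what gaming-up.sh does (requesting a spot instance from your saved AMI) can be sketched in a few lines of boto3. This is only a sketch; the region, price, AMI id and security group below are placeholders you would replace with your own values:

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')  # use whatever region your AMI lives in

# All ids below are placeholders; use the AMI from your own snapshot and your own security group
response = ec2.request_spot_instances(
    SpotPrice='0.50',              # the maximum hourly price you are willing to pay
    InstanceCount=1,
    LaunchSpecification={
        'ImageId': 'ami-xxxxxxxx',
        'InstanceType': 'g2.2xlarge',
        'SecurityGroupIds': ['sg-xxxxxxxx'],
    },
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])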

pfSense

I’m using pfSense at home instead of a normal router; it runs in VMware ESXi (5.1 at the moment, but an upgrade is coming) and works like a charm. I will not go into detail about pfSense since I assume that if you’re reading this you are kind of a geek anyways. Follow the steps below to set up an OpenVPN server in pfSense that your EC2 machine can connect to.

I used the information in this blog post to set up OpenVPN:
Create a stretched LAN between your site and vCloud using pfSense

  1. Create the OpenVPN server according to these settings; instead of using screenshots I printed my configuration page as a PDF. Most of it is standard and it’s all described in the blog post about the stretched LAN.
  2. Go to Interfaces / Interface Assignments and assign the aws-lan-bridged network port as OPT1, or whatever name you like.
  3. The firewall will probably have created some rules for your OpenVPN server, so you might not have to create the ones for the inbound traffic (WAN port 1194), but create the other rules as described in the blog post.
  4. Create the Bridge as described (it should consist of LAN and OPT1).

That’s what you need on the pfSense side of things, but if you’re like me and use VMware as a hypervisor you will need to do one more thing, which I found here after some serious Googling about why I couldn’t reach my internal network from EC2.

Log in to your ESXi host and issue a command along these lines from the command line:

esxcli network vswitch standard policy security set --allow-promiscuous=true --vswitch-name=vSwitch0

This assumes your vSwitch is named vSwitch0. I only had one, so it wasn’t that hard, but please refer to the VMware documentation; your version might differ from mine since I’m on ESXi 5.1.

Connecting

We have an EC2 machine and we have a pfSense OpenVPN server. Now we need a client configuration for the Windows machine, and it looks like this:

dev tap
persist-key
cipher AES-128-CBC
auth SHA1

resolv-retry infinite
proto udp
remote YOUR-PFSENSE-HOSTNAME 1194
keepalive 10 60
ping-timer-rem
<secret>
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
THIS BLOCK SHOULD BE COPIED FROM
Shared Key
IN THE 
Cryptographic Settings
SECTION OF THE OPENVPN SERVER
CONFIGURATION IN PFSENSE
-----END OpenVPN Static key V1-----
</secret>

Create a file called client.ovpn on your Windows server, then right-click the OpenVPN GUI taskbar icon and choose Import file…

Right-click the OpenVPN GUI icon again and you should have a menu entry for the client with Connect under it. Choose Connect and you should be connected to your LAN.

Steam

We left the fun stuff for last. Open Steam, log in with your credentials and make sure it’s configured for streaming; this is described in the first blog post linked above. On your local network, your other Steam client(s) should pick up that there’s a new device available for streaming.

Boot up your Steam Link and enjoy gaming!

Beware of shutting down the streaming server from the Steam Link: this will terminate the instance since it’s a spot instance.

Exporting Salesforce Files (aka ContentDocument)

Last week a client asked me to help out. We had built a system that creates PDF files in Salesforce using Drawloop (today known as Nintex Document Generation, which is a boring name).

Anyways, we had about 2000 PDFs created in the system, and after looking into it there doesn’t seem to be a way to download them in bulk. Sure, you can use the Data Loader and download them, but you’ll get the content in a CSV column and that doesn’t really fly with most customers.

I tried dataloader.io, Realfire and every link on Google (or at least the first two pages), and I didn’t find a good way of doing it.

There seems to be an old AppExchange listing for FileExporter by Salesforce Labs, and I think this is the actual FileExporter software, but it stopped working with the TLS 1.0 deprecation.

Enough small talk. I had to solve the problem, so I went ahead and created a very simple Python script that lets you specify the query to find your ContentVersion records and also filter the ContentDocuments if you need to ignore some ids.

My very specific use case was to export all PDF files with a certain pattern in the filename, but only those related to a custom object with a certain status. The catch is that you can’t run a query like this one:

SELECT ContentDocumentId, Title, VersionData, CreatedDate FROM ContentVersion WHERE ContentDocumentId IN (
SELECT ContentDocumentId FROM ContentDocumentLink where LinkedEntityId IN (SELECT Id FROM Custom_Object__c))

It gives you this error:

Entity 'ContentDocumentLink' is not supported for semi join inner selects

So I had to implement an option for a second query that produces the list of valid ContentDocumentIds to include in the download.
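The gist of what the script does looks roughly like this. This is a simplified sketch using simple_salesforce rather than the exact code from the repo, and the Status__c filter and the .pdf extension are made-up examples:

import requests
from simple_salesforce import Salesforce

# Credentials are placeholders
sf = Salesforce(username='user@example.com', password='password', security_token='token')

# 1. The custom object records we care about (the Status__c filter is just an example)
objs = sf.query_all("SELECT Id FROM Custom_Object__c WHERE Status__c = 'Approved'")
entity_ids = [r['Id'] for r in objs['records']]

# 2. ContentDocumentLink wants explicit ids in the filter, so build the IN list here
id_list = ",".join("'{}'".format(i) for i in entity_ids)
links = sf.query_all(
    "SELECT ContentDocumentId FROM ContentDocumentLink WHERE LinkedEntityId IN ({})".format(id_list))
valid_ids = {r['ContentDocumentId'] for r in links['records']}

# 3. The latest version of every file, filtered in Python against the valid ids
versions = sf.query_all(
    "SELECT ContentDocumentId, Title, VersionData FROM ContentVersion WHERE IsLatest = true")
for record in versions['records']:
    if record['ContentDocumentId'] not in valid_ids:
        continue
    # VersionData is a relative REST URL pointing at the binary content
    url = 'https://{}{}'.format(sf.sf_instance, record['VersionData'])
    response = requests.get(url, headers={'Authorization': 'Bearer ' + sf.session_id})
    with open(record['Title'] + '.pdf', 'wb') as f:
        f.write(response.content)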

The code is at https://github.com/snorf/salesforce-files-download; feel free to try it out and let me know whether or not it works for you.

One more thing: keep in mind that even if you’re an administrator with View All, you will not see ContentDocuments that don’t belong to you or aren’t explicitly shared with you. You’ll need to either change the ownership of the affected files or share them with the user running the Python script.
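If it’s only a handful of files, sharing them with the integration user can also be scripted; here’s a minimal sketch (the user id and document ids are placeholders):

from simple_salesforce import Salesforce

sf = Salesforce(username='user@example.com', password='password', security_token='token')

integration_user_id = '005xxxxxxxxxxxxxxx'      # the user running the download script
document_ids = ['069xxxxxxxxxxxxxxx']           # the ContentDocuments to expose

for doc_id in document_ids:
    # ShareType 'V' gives the linked entity (here, a user) viewer access to the document
    sf.ContentDocumentLink.create({
        'ContentDocumentId': doc_id,
        'LinkedEntityId': integration_user_id,
        'ShareType': 'V',
    })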

Ohana!

Talk to the fridge! (using Alexa, Salesforce and Electric Imp)

Long time, no blog post, sorry. I have been meaning to write this post forever but have managed to avoid it.

Anyways, consider the scenario where you’re sitting on your couch and you wonder:
– “What’s the temperature in my fridge?”
– “Did I close the door?”
– “What’s the humidity?”

You have already installed your Electric Imp hardware in the fridge (Best Trailhead Badge Ever) and it’s speaking to Salesforce via Platform Events. You even get a case when the temperature or humidity reaches a threshold or the door is open for too long.

But what if you just want to know the temperature? And you don’t have time to log into Salesforce to find out.

Alexa Skills to the rescue!

Thanks to this awesome blog post:
Building an Amazon Echo Skill with the Flow API

And this GitHub repository:
https://github.com/financialforcedev/alexa-salesforce-flow-skill

And example Flows from here:
https://github.com/financialforcedev/alexa-salesforce-flow-skill-examples

I’ll walk you through what’s needed to speak to your fridge.

I will only show the small pieces you need for setting this up; for details please read the original blog posts.

First of all you need an Alexa Skill; I have created one called Salesforce.

This is the interaction model:

{
  "intents": [
    {
      "intent": "FridgeStatus"
    }
  ]
}

And the Sample Utterances:

FridgeStatus How is my fridge

I won’t go into detail about the Lambda function and the connected app needed; please refer to this documentation:
https://github.com/financialforcedev/alexa-salesforce-flow-skill/wiki/Setup-and-Configuration

The important thing here is the FridgeStatus intent in the Sample Utterances: you’ll need a Flow called FridgeStatus.

Here’s mine:

Going into details:

And creating the response:

The Value is:

Your fridge temperature is {!Temperature} degrees Celsius, the humidity is {!Humidity} percent, and the door is {!DoorStatus}

The result sounds like this:

So the next time you wonder about the temperature in the fridge you won’t have to move from the couch. Awesome, right?

The next step would be to ask Alexa “What’s the average temperature during the last day?” and calculate the average from the Big Objects holding my temperature readings.

Cheers,
Johan

Visualise Big Object data in a Lightning Component

Good evening,

In my previous post (Upgrade your Electric Imp IoT Trailhead Project to use Big Objects) I showed how you can use Big Objects to archive data, and now I will show how you can visualise the data in a Lightning Component.

So now we have Big Object records being created, but the only way to see them is by executing a SOQL query in the Developer Console (SELECT DeviceId__c, Temperature__c, Humidity__c, ts__c FROM Fridge_Reading_History__b).

I have created a Lightning Component that uses an Apex Class to retrieve the data.

Let’s start with a screenshot of how it looks and then post the wall of code.

And in Salesforce1

And here’s the code:
Lightning Component

<aura:component controller="FridgeReadingHistoryController"
                implements="flexipage:availableForAllPageTypes,force:appHostable">
    <!-- Sketch of the markup: attribute and id names are taken from the controller,
         helper and Apex class below; the static resource name and layout are assumptions -->
    <aura:attribute name="today" type="String"/>
    <aura:attribute name="results" type="Integer" default="200"/>
    <aura:attribute name="width" type="Integer"/>
    <aura:attribute name="height" type="Integer"/>
    <ltng:require scripts="{!$Resource.ChartJS}" afterScriptsLoaded="{!c.doinit}"/>
    <lightning:spinner aura:id="spinner" class="slds-hide"/>
    <lightning:button label="Refresh" onclick="{!c.refreshData}"/>
    <canvas id="temperature" width="{!v.width}" height="{!v.height}"/>
</aura:component>

Controller

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
    doinit : function(component,event,helper){
        var today = new Date();
        component.set("v.today", today.toISOString());
        console.log(document.documentElement);
        component.set("v.width", document.documentElement.clientWidth);
        component.set("v.height", document.documentElement.clientHeight);
        helper.refreshData(component,event,helper);
    },
    refreshData : function(component,event,helper) {
        helper.refreshData(component,event,helper);
    }
})

Helper

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
        addData : function(chart, labels, data) {
            chart.data.labels = labels;
            chart.data.datasets[0] = data[0];
            chart.data.datasets[1] = data[1];
        },
        redrawData : function(component, event, helper, readings, chart, datasets) {
            helper.addData(chart, readings.ts, datasets);
            chart.update();
        },
        displayData : function(component, event, helper, readings) {
            var datasets = [readings.temperature, readings.humidity];
            var chart = window.myLine;
            if(chart != null) {
                helper.redrawData(component,event,helper,readings, chart, datasets);
                return; // the chart already exists, just update it instead of creating a new one
            }
            var config = {
                type: 'line',
                data: {
                    labels: readings.ts,
                    datasets: [{
                                 label: 'Temperature',
                                 backgroundColor: 'red',
                                 borderColor: 'red',
                                 data: readings.temperature,
                                 yAxisID: "y-axis-1",
                                 fill: false,
                             },
                             {
                                 label: 'Humidity',
                                 backgroundColor: 'blue',
                                 borderColor: 'blue',
                                 data: readings.humidity,
                                 yAxisID: "y-axis-2",
                                 fill: false,
                             }]
                },
                options: {
                    maintainAspectRatio: true,
                    responsive: true,
                    title:{
                        display:false,
                        text:'Temperature'
                    },
                    tooltips: {
                        mode: 'index',
                        intersect: false,
                    },
                    hover: {
                        mode: 'nearest',
                        intersect: true
                    },
                    scales: {
                        yAxes: [{
                            type: "linear", // only linear but allow scale type registration. This allows extensions to exist solely for log scale for instance
                            display: true,
                            position: "left",
                            id: "y-axis-1",
                        }, {
                            type: "linear", // only linear but allow scale type registration. This allows extensions to exist solely for log scale for instance
                            display: true,
                            position: "right",
                            id: "y-axis-2",

                            // grid line settings
                            gridLines: {
                                drawOnChartArea: false, // only want the grid lines for one axis to show up
                            },
                        }],
                    }
                }
            };
            var ctx = document.getElementById("temperature").getContext("2d");
            window.myLine = new Chart(ctx, config);
        },
    refreshData : function(component,event,helper) {
        var spinner = component.find('spinner');
        $A.util.removeClass(spinner, "slds-hide");
        var action = component.get("c.getFridgeReadings");
        var endDate = component.get("v.today");
        var results = component.get("v.results");
        action.setParams({
        	deviceId : "2352fc042b6dc0ee",
        	results : results,
        	endDate : endDate
    	});
        action.setCallback(this, function(response){
            var state = response.getState();
            if (state === "SUCCESS") {
                var fridgereadings = JSON.parse(response.getReturnValue());
                helper.displayData(component,event,helper,fridgereadings);
            }
            var spinner = component.find('spinner');
            $A.util.addClass(spinner, "slds-hide");
        });
        $A.enqueueAction(action);
    }
})

And the Apex Class that fetches the data:

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */

public with sharing class FridgeReadingHistoryController {

    public class FridgeReading {
        public String deviceId {get;set;}
        public List<String> ts {get;set;}
        public List<String> doorTs {get;set;}
        public List<Integer> door {get;set;}
        public List<Decimal> temperature {get;set;}
        public List<Decimal> humidity {get;set;}
        public FridgeReading(String deviceId) {
            this.deviceId = deviceId;
            this.ts = new List<String>();
            this.doorTs = new List<String>();
            this.door = new List<Integer>();
            this.temperature = new List<Decimal>();
            this.humidity = new List<Decimal>();
        }
        public void addReading(Fridge_Reading_History__b  fr) {
            addReading(fr.Temperature__c, fr.Humidity__c, fr.ts__c, fr.Door__c);
        }
        public void addReading(Decimal t, Decimal h, DateTime timeStamp, String d) {
            String tsString = timeStamp.format('HH:mm dd/MM');
            this.ts.add(tsString);
            temperature.add(t);
            humidity.add(h);
            Integer doorStatus = d == 'open' ? 1 : 0;
            if(door.size() == 0 || doorStatus != door.get(door.size()-1)) {
                door.add(doorStatus);
                doorTs.add(tsString);
            }
        }
    }

    @AuraEnabled
    public static String getFridgeReadings(String deviceId, Integer results, DateTime endDate) {
        if(results == null) {
            results = 200;
        }
        FridgeReading fr = new FridgeReading(deviceId);
        system.debug('RESULTS: ' +results);
        List<Fridge_Reading_History__b> frhs = [
                SELECT DeviceId__c, Temperature__c, Humidity__c, Door__c, ts__c
                FROM Fridge_Reading_History__b
                WHERE DeviceId__c = :deviceId AND ts__c < :endDate
                LIMIT :Integer.valueof(results)
        ];
        for (Integer i = frhs.size() - 1; i >= 0; i--) {
            Fridge_Reading_History__b frh = frhs[i];
            fr.addReading(frh);
        }
        return JSON.serialize(fr);
    }
}

The component assumes you have Chart.js as a static resource; mine is here.

There are no test cases anywhere and the code is probably not production grade.

The next step would be to use aggregate functions on the Big Objects to show data over a longer period of time.

Cheers,
Johan

Upgrade your Electric Imp IoT Trailhead Project to use Big Objects

I first heard about Big Objects in a webinar. At first I didn’t really see a use case, and it was in beta so I didn’t care that much, but now that it’s released in Winter ’18 everything has changed.

My favourite Trailhead badge is still the Electric Imp IoT one, and I thought it would be fun to store the temperature readings over a longer period of time. Since I run my integration in a Developer Edition I have 5 MB of storage available, which is not that much given that I receive between 1 and 2 Platform Events per minute.

Most records in Salesforce use 2 KB of storage each (details here), so with 5 MB I can store about 2,500 records (less, actually, since I have other data in the org).

Big Objects give you a limit of 1,000,000 records, so this should be enough for about a year’s worth of readings. Big Objects are meant for archiving and you actually can’t delete them, so I have no idea what will happen when I hit the limit, but I’ll write about it then.
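A quick sanity check of those numbers, assuming roughly 1.5 events per minute:

# Regular storage: ~5 MB at ~2 KB per record
print(5 * 1024 / 2)                  # ~2,560 records

# Big Object limit vs. ingestion rate
print(1.5 * 60 * 24 * 365)           # ~788,400 records per year, under the 1,000,000 limit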

Anyways, there are some limitations on Big Objects:
* You can’t create them from the Web Interface
* You can’t run Triggers/Workflows/Processes on them
* You can’t create a Tab for them

The only way to visualise them is to build a Visualforce Page or a Lightning Component and that’s exactly what I’m going to do in this blog post.

Archiving the data

Starting out, I’m creating the Big Object using the Metadata API. The object looks very similar to a standard object, and I actually stole my object definition from a custom object called Fridge_Reading_Daily_History__c. The reason I had to create that object is that I can’t create a Big Object record from a trigger, and I want to store every Platform Event.

Fridge_Reading_Daily_History__c has the same fields as my Platform Event (described here), and I’m going to create a Fridge_Reading_Daily_History__c record for every Platform Event received.

The Big Object definition looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- Element names follow the standard Metadata API format; the field labels are assumptions -->
    <deploymentStatus>Deployed</deploymentStatus>
    <fields>
        <fullName>DeviceId__c</fullName>
        <externalId>false</externalId>
        <label>DeviceId</label>
        <length>16</length>
        <required>true</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Door__c</fullName>
        <externalId>false</externalId>
        <label>Door</label>
        <length>9</length>
        <required>true</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Humidity__c</fullName>
        <externalId>false</externalId>
        <label>Humidity</label>
        <precision>10</precision>
        <required>true</required>
        <scale>4</scale>
        <type>Number</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Temperature__c</fullName>
        <externalId>false</externalId>
        <label>Temperature</label>
        <precision>10</precision>
        <required>true</required>
        <scale>5</scale>
        <type>Number</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>ts__c</fullName>
        <externalId>false</externalId>
        <label>ts</label>
        <required>true</required>
        <type>DateTime</type>
    </fields>
    <label>Fridge Readings History</label>
</CustomObject>

Keep in mind that after you have created it you can’t modify much of it, so you need to remove it (this can be done from Setup) and then deploy it again.

In my previous post I created a trigger that updated my SmartFridge__c object for every Platform Event. This works fine, but with Winter ’18 you can actually create Processes that handle Platform Events, so I changed this. Basically you create a Process that listens for Fridge_Reading__e events and finds the SmartFridge__c with the same DeviceId__c.

This is what my process looks like:

I added a criteria to check that no fields were null (I set them as required on my Fridge_Reading_Daily_History__c object)

Then I update my SmartFridge__c object

And create a new Fridge_Reading_Daily_History__c object

So far so good. Now I have to make sure I archive my Fridge_Reading_Daily_History__c records before I run out of space.

After trying different ways to do this (Scheduled Apex), I realised that I can’t archive and delete the records in the same transaction (it’s in the documentation for Big Objects), and I don’t want a scheduled job every hour that archives to the Big Object and then another Scheduled Apex job that deletes the Fridge_Reading_Daily_History__c records that have been archived.

In the end I settled on a Process on Fridge_Reading_Daily_History__c that runs when a record is created.

The process checks if the Name of the object (AutoNumber) is evenly divisible by 50

If so it calls an Invocable Apex function

And the Apex code looks like this:

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */

public class PurgeDailyFridgeReadings {
    @InvocableMethod(label='Purge DTR' description='Purges Daily Temperature Readings')
    public static void purgeDailyTemperatureReadings(List<Id> items) { // record ids passed in from Process Builder
        archiveTempReadings();
        deleteRecords();
    }

    @future(callout = true)
    public static void deleteRecords() {
        Datetime lastReading = [SELECT DeviceId__c, Temperature__c, ts__c FROM Fridge_Reading_History__b LIMIT 1].ts__c;
        for(List<Fridge_Reading_Daily_History__c> readings :
        [SELECT Id FROM Fridge_Reading_Daily_History__c WHERE ts__c < :lastReading]) {
            delete(readings);
        }
    }

    @future(callout = true)
    public static void archiveTempReadings() {
        Datetime lastReading = [SELECT DeviceId__c, Temperature__c, ts__c FROM Fridge_Reading_History__b LIMIT 1].ts__c;
        for(List<Fridge_Reading_Daily_History__c> toArchive : [SELECT Id,ts__c,DeviceId__c,Door__c,Temperature__c,Humidity__c
        FROM Fridge_Reading_Daily_History__c]) {
            List<Fridge_Reading_History__b> updates = new List<Fridge_Reading_History__b>();
            for (Fridge_Reading_Daily_History__c event : toArchive) {
                Fridge_Reading_History__b frh = new Fridge_Reading_History__b();
                frh.DeviceId__c = event.DeviceId__c;
                frh.Door__c = event.Door__c;
                frh.Humidity__c = event.Humidity__c;
                frh.Temperature__c = event.Temperature__c;
                frh.ts__c = event.ts__c;
                updates.add(frh);
            }
            Database.insertImmediate(updates);
        }
    }
}

This class calls the two future methods that archive and delete. Yes, they might not run in sequence, but it doesn’t really matter. Also, you might wonder why there’s a (callout=true) on the future methods: I got a CalloutException when trying to run it, so I guess the data is not stored inside Salesforce but rather in Heroku or something similar, and it needs a callout to get the data (I got the error on the SELECT line).

Big Objects are probably implemented like External Objects, which makes sense.

The Visualisation is done in the next post:
Visualise Big Object data in a Lightning Component

Cheers,
Johan

Uploading CSV data to Einstein Analytics with AWS Lambda (Python)


I have been playing around with Einstein Analytics (the thing they used to call Wave) and I wanted to automate the upload of data, since there’s no reason to have dashboards and lenses if the data is stale.

After using Lambda functions against the Bulk API I wanted to have something similar, and I found another nice project over at Heroku’s GitHub account called pyAnalyticsCloud.

I don’t have a Postgres Database so I ended up using only the uploader.py file and wrote this Lambda function to use it:

from __future__ import print_function

import json
from base64 import b64decode
import boto3
import uuid
import os
import logging
import unicodecsv
from uploader import AnalyticsCloudUploader

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
file_bucket = os.environ['FILE_BUCKET']
wsdl_file_key = os.environ['WSDL_FILE_KEY']
metadata_file_key = os.environ['METADATA_FILE_KEY']

def bulk_upload(csv_path, wsdl_file_path, metadata_file_path):
    with open(csv_path, mode='r') as csv_file:
        logger.info('Initiating Wave Data upload.')
        logger.debug('Loading metadata')
        metadata = json.loads(open(metadata_file_path, 'r').read())

        logger.debug('Loading CSV data')
        data = unicodecsv.reader(csv_file)
        edgemart = metadata['objects'][0]['name']

        logger.debug('Creating uploader')
        uploader = AnalyticsCloudUploader(metadata, data)
        logger.debug('Logging in to Wave')
        uploader.login(wsdl_file_path, username, password, security_token)
        logger.debug('Uploading data')
        uploader.upload(edgemart)
        logger.info('Wave Data uploaded.')
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        csv_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, csv_path)

        # WSDL file
        wsdl_file_path = '/tmp/{}{}'.format(uuid.uuid4(), wsdl_file_key)
        s3_client.download_file(file_bucket, wsdl_file_key, wsdl_file_path)

        # Metadata file
        metadata_file_path = '/tmp/{}{}'.format(uuid.uuid4(), metadata_file_key)
        s3_client.download_file(file_bucket, metadata_file_key, metadata_file_path)
        return bulk_upload(csv_path, wsdl_file_path, metadata_file_path)

Yes, the logging is a bit on the extensive side. Make sure to add these environment variables in AWS Lambda:

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
FILE_BUCKET - the bucket where the WSDL and metadata files are stored
METADATA_FILE_KEY - the path to the metadata file in that bucket (you get this from Einstein Analytics)
WSDL_FILE_KEY - the path to the partner WSDL file in the bucket

I added an S3 trigger that runs this function as soon as a new file is uploaded. It has some issues (crashing on parentheses in the file name, for example), so please don’t use this for a production workload before making it enterprise grade.

Note: The code above only works in Python 2.7
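The parenthesis crash is most likely because S3 URL-encodes the object key in the event notification, so decoding the key before downloading should help. A sketch for the Python 2.7 runtime this function targets:

from urllib import unquote_plus  # urllib.parse.unquote_plus on Python 3

def clean_key(record):
    # S3 event notifications URL-encode the key: spaces become '+' and
    # special characters may be percent-encoded
    return unquote_plus(record['s3']['object']['key'].encode('utf8'))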

Cheers

Upgrade your Electric Imp IoT Trailhead Project to use Platform Events

As an avid trailblazer I just have to Catch ‘Em All (Trailblazer badges) and the project to integrate Electric Imp in my fridge was a fun one.

Build an IoT Integration with Electric Imp


After buying a USB cable to supply it with power it now runs 24/7 and I get cases all the time; I haven’t really tweaked the setup yet.

I have been looking at the new Platform Events, and I thought that this integration can’t keep using a simple upsert operation on an SObject; it’s 2017, for god’s sake! Said and done, I set out to change the agent code in the Trailhead project to publish a Platform Event every time it’s time to send an update to Salesforce.

First of all you need to define your Platform Event; here is the XML representation of it:

<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- Element names follow the standard Metadata API format; the field labels are assumptions -->
    <deploymentStatus>Deployed</deploymentStatus>
    <fields>
        <fullName>DeviceId__c</fullName>
        <externalId>false</externalId>
        <isFilteringDisabled>false</isFilteringDisabled>
        <isNameField>false</isNameField>
        <isSortingDisabled>false</isSortingDisabled>
        <label>DeviceId</label>
        <length>16</length>
        <required>true</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Door__c</fullName>
        <externalId>false</externalId>
        <isFilteringDisabled>false</isFilteringDisabled>
        <isNameField>false</isNameField>
        <isSortingDisabled>false</isSortingDisabled>
        <label>Door</label>
        <length>10</length>
        <required>false</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Humidity__c</fullName>
        <externalId>false</externalId>
        <isFilteringDisabled>false</isFilteringDisabled>
        <isNameField>false</isNameField>
        <isSortingDisabled>false</isSortingDisabled>
        <label>Humidity</label>
        <precision>6</precision>
        <required>false</required>
        <scale>2</scale>
        <type>Number</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>Temperature__c</fullName>
        <externalId>false</externalId>
        <isFilteringDisabled>false</isFilteringDisabled>
        <isNameField>false</isNameField>
        <isSortingDisabled>false</isSortingDisabled>
        <label>Temperature</label>
        <precision>6</precision>
        <required>false</required>
        <scale>2</scale>
        <type>Number</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>ts__c</fullName>
        <externalId>false</externalId>
        <isFilteringDisabled>false</isFilteringDisabled>
        <isNameField>false</isNameField>
        <isSortingDisabled>false</isSortingDisabled>
        <label>ts</label>
        <required>false</required>
        <type>DateTime</type>
    </fields>
    <label>Fridge Readings</label>
</CustomObject>

In short it’s just fields to hold the same values as on the SmartFridge__c object.

The updates to the agent code can be found on my GitHub account here.

When a Platform Event is received it needs to update the SmartFridge__c object so everything works as before; this is done with a trigger:

trigger FridgeReadingTrigger on Fridge_Reading__e (after insert) {
    List<SmartFridge__c> updates = new List<SmartFridge__c>();
    for (Fridge_Reading__e event : Trigger.New) {
        System.debug('Event DeviceId ' + event.DeviceId__c);
        SmartFridge__c sf = new SmartFridge__c(DeviceId__c = event.DeviceId__c);
        sf.Door__c = event.Door__c;
        sf.Humidity__c = event.Humidity__c;
        sf.Temperature__c = event.Temperature__c;
        sf.ts__c = event.ts__c;
        updates.add(sf);
    }
    upsert updates DeviceId__c;
}

In Winter ’18 you can use Process Builder on Platform Events, but my Developer Edition won’t be upgraded until next Saturday.

So I made things a bit more complex by introducing Platform Events and a trigger but I feel better knowing that I use more parts of the platform. Next step will be to use Big Objects to store the readings from the fridge over time and visualize them.

Cheers

Sous Vide Ginger Shots with Chili

Autumn means darkness, less sunlight and people sneezing wherever you go. One way to mitigate this is to make your own ginger shots with chili, honey and lemon.

I got this recipe from my mom and I make it regularly.

  • 200 grams of fresh ginger
  • 1 liter of water
  • 1/2 deciliter of honey
  • 2 lemons
  • Chili (optional)

ginger habanero lemon

Set your Sous Vide (I have an Anova) to 60 degrees Celsius.
Shred the ginger, chop the chili and add them together with 1 liter of water to a sous vide bag.

shredded ginger


chopped habanero

Put it in the pot for 20 minutes.

Use a strainer to remove ginger and chili, run the liquid through a filter afterwards (I use a metallic coffee filter and it works great).

Add honey and stir.

Wait for it to cool down and add the squeezed lemons.

Store in fridge for up to two weeks and enjoy anytime you feel like boosting your immune defence system. Make sure to shake before serving since the brew will separate.


Enjoy!

Using AWS Lambda functions with the Salesforce Bulk API


One common task when integrating Salesforce with a customer’s systems is to import data, either as a one-time task or regularly.

This can be done in several ways depending on the in-house technical level, and the simplest way might be to use the Import Wizard or the Data Loader. If you want to do it regularly in a batch fashion and are fortunate enough to have AWS infrastructure available, using Lambda functions is an alternative.

Recently I did this as a prototype and I will share my findings here.

I will not go into detail about AWS and Lambda. I used this tutorial to get started with Lambda functions, but most of it concerns AWS specifics like IAM rather than the Salesforce parts.

I found this Heroku project for using the Bulk API.

The full Python code looks like this:

from __future__ import print_function
from base64 import b64decode
import boto3
import uuid
import csv
import os
from salesforce_bulk import SalesforceBulk, CsvDictsAdapter
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
mapping_file_bucket = os.environ['MAPPING_FILE_BUCKET']
mapping_file_key = os.environ['MAPPING_FILE_KEY']

def bulk_upload(csv_path, mapping_file_path):
    with open(csv_path, mode='r') as infile:
        logger.info('Trying to login to SalesforceBulk')
        job = None
        try:
            bulk = SalesforceBulk(username=username, password=password, security_token=security_token)
            job = bulk.create_insert_job("Account", contentType='CSV')

            # Mapping file
            mapping_file = open(mapping_file_path, 'rb')
            bulk.post_mapping_file(job, mapping_file.read())

            accounts = csv.DictReader(infile)
            csv_iter = CsvDictsAdapter(iter(accounts))
            batch = bulk.post_batch(job, csv_iter)
            bulk.wait_for_batch(job, batch)
            bulk.close_job(job)
            logger.info('Done. Accounts uploaded.')
        except Exception as e:
            if job:
                bulk.abort_job(job)
            raise e
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, download_path)

        # Mapping file
        mapping_file_path = '/tmp/{}{}'.format(uuid.uuid4(), mapping_file_key)
        s3_client.download_file(mapping_file_bucket, mapping_file_key, mapping_file_path)

        return bulk_upload(download_path, mapping_file_path)

Make sure to add the following environment variables in Lambda before executing:

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
MAPPING_FILE_BUCKET - the bucket in where to find the mapping file
MAPPING_FILE_KEY - the path to the mapping file in that bucket

I also added a method (in my own clone of the project here) to be able to provide the mapping file as part of the payload; I’ll make sure to create a pull request for this later.

The nice thing about using the Bulk API is that you get monitoring directly in Salesforce; just go to Setup → Bulk Data Load Jobs to see the status of your job(s).

I haven’t added the S3 trigger that listens for new files yet, but it’s the next part of the tutorial, so it shouldn’t be a problem.
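Wiring up the S3 trigger can also be scripted with boto3; here is a sketch where the bucket name and Lambda ARN are placeholders (the Lambda function also needs a resource policy that allows S3 to invoke it):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='my-upload-bucket',   # placeholder: the bucket receiving the CSV files
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:eu-west-1:123456789012:function:sf-bulk-upload',
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)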

Cheers,
Johan

Trailhead is awesome and gamification totally works!

About this time last year I decided to pursue a career within Salesforce; I was a bit tired of my job at the time and wanted a change. It was either a backend engineer role at iZettle or becoming a Salesforce consultant. The consultant role was not new to me, since that’s how I started my career. After signing the contract I decided to look at Trailhead, since I had heard a lot of good things about it.

I earned my first badge, Salesforce Platform Basics, at 2016-10-04 20:32 UTC, and it was quite easy. My goal was to take a bunch before I started working on the 2nd of January 2017. That didn’t happen, but I started doing them from my first day in the office.

Having worked with Salesforce since 5/11/2012, I thought I knew most of the platform, but that was far from the truth: Platform Cache, Hierarchical Custom Settings, Shield, Communities, Live Agent, etc. I haven’t worked with much of it, but today I still know how the features work, and if a customer asks me about them I can at least give a brief explanation of most things.

Back to the gamification part: at EINS we set a goal for 2017 to have at least 98% of all badges between us in the team. Using partners.salesforce.com to calculate this value is really hard, so I set out to build a dashboard for the team so that we could see our score.

After some iterations it looks OK; the Lightning Design System makes everything look great.
Trailhead Tracker

It looks awesome on your phone too

The dashboard has really helped, mostly because people in the team now see who took a badge over the weekend and notice when someone passes their total number of badges. This has encouraged everyone to go that extra mile and take that extra badge.

Healthy competition is always good, and when you learn things that help you do a better job while at it, it’s definitely a win-win!

Feel free to check out the dashboard at http://trailhead.eins.se/; also, if you click a person’s name you can see which badges he/she is missing. This makes it easy when you’re looking for a quick badge to take on the subway or the bus.
Trailhead Tracker User Page

Another addition was the #architectjourney pyramid at the bottom, since we’re scraping certifications too.
Trailhead Tracker Architect Journey

The last thing we added to the dashboard was Slack notifications when someone completes a badge or gets a Salesforce certification. Of course, the first version of this spammed all over our #trailhead channel, but that bug is long gone now.
Trailhead Tracker Slack

Exporting the data as CSV and importing it into Wave lets you gain some insights into when people take badges (and when they work).
Badges per Month

So in summary, Trailhead gamification totally works but you need a dashboard.

Cheers,
Johan

PS. My aim is to clean up the code and put it on GitHub when I have the time DS.