Mass deleting picklist values in Salesforce with an AJAX JavaScript hack (2018 version)

More than five years ago I wrote an article on how to mass delete picklist values in Salesforce. It is still my most visited article, and I have been meaning to get back to it for years. It now looks like this will become standard functionality sometime in the near future (mass delete picklist values (setup)), but today I had to do it at a customer, so I had to solve it once again.

I tried using my old script but ended up with errors; Lightning doesn't like loading external JavaScript into itself. I respect that and switched to Classic, since this is a sysadmin-only exercise anyway.

The error in the browser console cites the "Aura Content Security Policy directive".

It works out of the box in Classic, but it's very quiet: you get no feedback about what's actually happening. Also, if you accidentally click the bookmark in a production environment, you're going to have a bad time.

Fuck It, We’ll Do It Live

The updated JavaScript looks like this:


var allClear = function() {
    var links = document.getElementsByTagName("a");
    var whatToDelete = prompt("Do you want to Delete 'Active' or 'Inactive' picklist values?", "Inactive");
    if (!(whatToDelete === "Active" || whatToDelete === "Inactive")) {
        window.alert("Invalid choice, quitting");
    } else {
        var onlyInactive = whatToDelete === "Inactive";

        var delLinks = new Array();
        for (var i = 0; i < links.length - 1; i++) {
            var link = links[i];
            if (onlyInactive) {
                // Inactive rows have an "Activate" link; the "Del" link sits just before it
                if (link.innerHTML === "Activate") {
                    delLinks.push(links[i - 1]);
                }
            } else {
                if (link.innerHTML === "Del") {
                    delLinks.push(link);
                }
            }
        }

        if (delLinks.length === 0) {
            window.alert("Nothing to delete");
        } else {
            var goAhead = confirm("You're about to delete " + delLinks.length + " picklist values");
            if (goAhead) {
                for (var j = 0; j < delLinks.length; j++) {
                    var delLink = delLinks[j].href;
                    // Synchronous AJAX style
                    var xmlhttp = new XMLHttpRequest();
                    xmlhttp.open("GET", delLink, false);
                    xmlhttp.send();
                    console.log("Deleting #" + j + ": " + delLink);
                }
                // Reload the page when the deletes are done
                window.setTimeout(function() { window.location.reload(); }, 2000);
            }
        }
    }
};
allClear();

You can still load it from where I store it by creating a bookmark with this URL:

javascript:(function()%7Bvar s = document.createElement("script"); s.src = ""; void(document.body.appendChild(s));%7D)()

Clicking it on a Global Value Set Detail page will give you a prompt:

Clicking OK will go ahead, select the inactive picklist values, and prompt again:

Clicking Cancel aborts at any point.
Clicking OK on this last confirmation dialogue will delete your inactive picklist values, and the page will refresh.

If you want to delete Active Picklist values you'll have to change the "Inactive" string to "Active" after clicking the bookmark:

The same thing happens next; you're asked to confirm:

Picklist values are deleted and the page reloaded. If you have a lot of values it might take some time, so opening the Developer Console is not a bad idea:

Deleting #262:
Deleting #263:
Deleting #264:
Deleting #265:
Deleting #266:
Deleting #267: 

I deleted 350 picklist values in just over two minutes, so it will not take forever. The next step for this script would be a spinning progress bar and some bells and whistles, but for now this at least solves the problem.

Since a value set needs at least one value, the script will not be able to delete all of the active picklist values, but at least you'll save some mouse clicks.

You'll get a warning in the browser that synchronous XMLHttpRequests are deprecated:

Running the requests asynchronously works fine too, but the browser gets completely swamped when you're deleting hundreds of picklist values, so the synchronous version is better here. When the day finally comes and support for synchronous requests is removed, I'll make sure to update this, but until then this hack is good enough.

Exporting Salesforce Files (aka ContentDocument)

Last week a client asked me to help out. We had built a system that creates PDF files in Salesforce using Drawloop (today known as Nintex Document Generation, which is a boring name).

Anyway, we had about 2,000 PDFs created in the system, and after looking into it there doesn't seem to be a way to download them in bulk. Sure, you can use the Data Loader and download them, but you'll get the content in a CSV column, and that doesn't really fly with most customers.

I tried Realfire and searched through every link on Google (or at least the first two pages) without finding a good way of doing it.

There is an old AppExchange listing for FileExporter by Salesforce Labs, and I think that is the actual FileExporter software; it stopped working with the TLS 1.0 deprecation.

Enough small talk. I had to solve the problem, so I went ahead and created a very simple Python script that lets you specify the query to find your ContentVersion records and also filter the ContentDocuments if you need to ignore some ids.

My very specific use case was to export all PDF files with a certain pattern in the filename, but only those related to a custom object with a certain status. Given that you can't do queries like this one:

SELECT ContentDocumentId, Title, VersionData, CreatedDate FROM ContentVersion WHERE ContentDocumentId IN (
SELECT ContentDocumentId FROM ContentDocumentLink where LinkedEntityId IN (SELECT Id FROM Custom_Object__c))

It gives you a:

Entity 'ContentDocumentLink' is not supported for semi join inner selects

I had to implement an option for a second query that gives a list of valid ContentDocumentIds to include in the download.
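To sketch the idea in Python (this is not the actual script: `sf` is assumed to be a logged-in `simple_salesforce` connection, and `Custom_Object__c` stands in for the real custom object), the flow is: run the link query, collect the valid ids, then download every matching ContentVersion:

```python
def allowed_ids(link_records):
    """Collect the valid ContentDocumentIds from the ContentDocumentLink query."""
    return {r['ContentDocumentId'] for r in link_records}

def wanted_versions(version_records, allowed):
    """Keep only the ContentVersions whose parent document is in the allowed set."""
    return [v for v in version_records if v['ContentDocumentId'] in allowed]

def export_files(sf):
    """Download the filtered PDFs; assumes a logged-in simple_salesforce instance."""
    links = sf.query_all(
        "SELECT ContentDocumentId FROM ContentDocumentLink "
        "WHERE LinkedEntityId IN (SELECT Id FROM Custom_Object__c)")
    allowed = allowed_ids(links['records'])
    versions = sf.query_all(
        "SELECT ContentDocumentId, Title, VersionData FROM ContentVersion "
        "WHERE FileType = 'PDF'")
    for v in wanted_versions(versions['records'], allowed):
        # VersionData is returned as a REST path to the binary content
        url = 'https://{}{}'.format(sf.sf_instance, v['VersionData'])
        blob = sf.session.get(
            url, headers={'Authorization': 'Bearer ' + sf.session_id})
        with open(v['Title'] + '.pdf', 'wb') as f:
            f.write(blob.content)
```

The filtering is done client-side precisely because the semi-join above is not allowed server-side.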

The code is at, feel free to try it out and let me know if it works for you.

One more thing: keep in mind that even if you're an administrator with View All, you will not see ContentDocuments that don't belong to you or aren't explicitly shared with you. You'll need to either change the ownership of the affected files or share them with the user running the Python script.


Visualise Big Object data in a Lightning Component

Good evening,

In my previous post (Upgrade your Electric Imp IoT Trailhead Project to use Big Objects) I showed how you can use Big Objects to archive data; now I will show how to visualise that data in a Lightning Component.

So now we have Big Object records being created, but the only way to see them is by executing a SOQL query in the Developer Console (SELECT DeviceId__c, Temperature__c, Humidity__c, ts__c FROM Fridge_Reading_History__b).

I have created a Lightning Component that uses an Apex Class to retrieve the data.

Let's start with a screenshot of how it looks, and then the wall of code.

And in Salesforce1

And here’s the code:
Lightning Component



/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
    doinit : function(component, event, helper) {
        var today = new Date();
        component.set("v.endDate", today.toISOString());
        component.set("v.width", document.documentElement.clientWidth);
        component.set("v.height", document.documentElement.clientHeight);
        helper.refreshData(component, event, helper);
    },
    refreshData : function(component, event, helper) {
        helper.refreshData(component, event, helper);
    }
})

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
    addData : function(chart, labels, data) {
        chart.data.labels = labels;
        chart.data.datasets[0].data = data[0];
        chart.data.datasets[1].data = data[1];
        chart.update();
    },
    redrawData : function(component, event, helper, readings, chart, datasets) {
        helper.addData(chart, readings.ts, datasets);
    },
    displayData : function(component, event, helper, readings) {
        var datasets = [readings.temperature, readings.humidity];
        var chart = window.myLine;
        if (chart != null) {
            helper.redrawData(component, event, helper, readings, chart, datasets);
            return;
        }
        var config = {
            type: 'line',
            data: {
                labels: readings.ts,
                datasets: [{
                    label: 'Temperature',
                    backgroundColor: 'red',
                    borderColor: 'red',
                    data: readings.temperature,
                    yAxisID: "y-axis-1",
                    fill: false
                }, {
                    label: 'Humidity',
                    backgroundColor: 'blue',
                    borderColor: 'blue',
                    data: readings.humidity,
                    yAxisID: "y-axis-2",
                    fill: false
                }]
            },
            options: {
                maintainAspectRatio: true,
                responsive: true,
                tooltips: {
                    mode: 'index',
                    intersect: false
                },
                hover: {
                    mode: 'nearest',
                    intersect: true
                },
                scales: {
                    yAxes: [{
                        type: "linear", // only linear but allow scale type registration
                        display: true,
                        position: "left",
                        id: "y-axis-1"
                    }, {
                        type: "linear",
                        display: true,
                        position: "right",
                        id: "y-axis-2",
                        // only want the grid lines for one axis to show up
                        gridLines: {
                            drawOnChartArea: false
                        }
                    }]
                }
            }
        };
        var ctx = document.getElementById("temperature").getContext("2d");
        window.myLine = new Chart(ctx, config);
    },
    refreshData : function(component, event, helper) {
        var spinner = component.find('spinner');
        $A.util.removeClass(spinner, "slds-hide");
        var action = component.get("c.getFridgeReadings");
        var endDate = component.get("v.endDate");
        var results = component.get("v.results");
        action.setParams({
            deviceId : "2352fc042b6dc0ee",
            results : results,
            endDate : endDate
        });
        action.setCallback(this, function(response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                var fridgereadings = JSON.parse(response.getReturnValue());
                helper.displayData(component, event, helper, fridgereadings);
            }
            var spinner = component.find('spinner');
            $A.util.addClass(spinner, "slds-hide");
        });
        $A.enqueueAction(action);
    }
})

And the Apex Class that fetches the data:

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
public with sharing class FridgeReadingHistoryController {

    public class FridgeReading {
        public String deviceId {get;set;}
        public List<String> ts {get;set;}
        public List<String> doorTs {get;set;}
        public List<Integer> door {get;set;}
        public List<Decimal> temperature {get;set;}
        public List<Decimal> humidity {get;set;}
        public FridgeReading(String deviceId) {
            this.deviceId = deviceId;
            this.ts = new List<String>();
            this.doorTs = new List<String>();
            this.door = new List<Integer>();
            this.temperature = new List<Decimal>();
            this.humidity = new List<Decimal>();
        }
        public void addReading(Fridge_Reading_History__b fr) {
            addReading(fr.Temperature__c, fr.Humidity__c, fr.ts__c, fr.Door__c);
        }
        public void addReading(Decimal t, Decimal h, DateTime timeStamp, String d) {
            String tsString = timeStamp.format('HH:mm dd/MM');
            ts.add(tsString);
            temperature.add(t);
            humidity.add(h);
            // Only record door transitions
            Integer doorStatus = d == 'open' ? 1 : 0;
            if (door.size() == 0 || doorStatus != door.get(door.size() - 1)) {
                doorTs.add(tsString);
                door.add(doorStatus);
            }
        }
    }

    @AuraEnabled
    public static String getFridgeReadings(String deviceId, Integer results, DateTime endDate) {
        if (results == null) {
            results = 200;
        }
        FridgeReading fr = new FridgeReading(deviceId);
        System.debug('RESULTS: ' + results);
        List<Fridge_Reading_History__b> frhs = [
                SELECT DeviceId__c, Temperature__c, Humidity__c, Door__c, ts__c
                FROM Fridge_Reading_History__b
                WHERE DeviceId__c = :deviceId AND ts__c < :endDate
                LIMIT :Integer.valueOf(results)
        ];
        for (Integer i = frhs.size() - 1; i >= 0; i--) {
            fr.addReading(frhs[i]);
        }
        return JSON.serialize(fr);
    }
}

The component assumes you have Chart.js as a static resource; mine is here.

There are no test cases anywhere and the code is probably not production grade.

The next step would be to use aggregate functions on the Big Objects to show data over a longer period of time.


Uploading CSV data to Einstein Analytics with AWS Lambda (Python)

I have been playing around with Einstein Analytics (the thing they used to call Wave), and I wanted to automate the upload of data, since there's no reason to have dashboards and lenses if the data is stale.

After using Lambda functions against the Bulk API I wanted something similar, and I found another nice project over at Heroku's GitHub account called pyAnalyticsCloud.

I don’t have a Postgres Database so I ended up using only the file and wrote this Lambda function to use it:

from __future__ import print_function

import json
from base64 import b64decode
import boto3
import uuid
import os
import logging
import unicodecsv
from uploader import AnalyticsCloudUploader

logger = logging.getLogger()

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
file_bucket = os.environ['FILE_BUCKET']
wsdl_file_key = os.environ['WSDL_FILE_KEY']
metadata_file_key = os.environ['METADATA_FILE_KEY']

def bulk_upload(csv_path, wsdl_file_path, metadata_file_path):
    with open(csv_path, mode='r') as csv_file:
        logger.info('Initiating Wave Data upload.')
        logger.debug('Loading metadata')
        metadata = json.loads(open(metadata_file_path, 'r').read())

        logger.debug('Loading CSV data')
        data = unicodecsv.reader(csv_file)
        edgemart = metadata['objects'][0]['name']

        logger.debug('Creating uploader')
        uploader = AnalyticsCloudUploader(metadata, data)
        logger.debug('Logging in to Wave')
        uploader.login(wsdl_file_path, username, password, security_token)
        logger.debug('Uploading data')
        uploader.upload(edgemart)
        logger.info('Wave Data uploaded.')
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        csv_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, csv_path)

        # WSDL file
        wsdl_file_path = '/tmp/{}{}'.format(uuid.uuid4(), wsdl_file_key)
        s3_client.download_file(file_bucket, wsdl_file_key, wsdl_file_path)

        # Metadata file
        metadata_file_path = '/tmp/{}{}'.format(uuid.uuid4(), metadata_file_key)
        s3_client.download_file(file_bucket, metadata_file_key, metadata_file_path)
        return bulk_upload(csv_path, wsdl_file_path, metadata_file_path)

Yes, the logging is a bit on the extensive side. Make sure to add these environment variables in AWS Lambda:

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
FILE_BUCKET - the bucket where the WSDL and metadata files are stored
METADATA_FILE_KEY - the path to the metadata file in that bucket (you get this file from Einstein Analytics)
WSDL_FILE_KEY - the path to the WSDL partner file in the bucket

I added an S3 trigger that runs this function as soon as a new file is uploaded. It has some issues (crashing on parentheses in the file name, for example), so please don't use this for a production workload before making it enterprise grade.
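For reference, the handler above only reads two fields from each S3 record; here is a minimal sketch of the event shape the trigger delivers (bucket and key names are hypothetical, and a real event carries many more fields):

```python
def extract_s3_objects(event):
    """Return (bucket, key) pairs for every record in an S3 trigger event."""
    return [(r['s3']['bucket']['name'], r['s3']['object']['key'])
            for r in event['Records']]

sample_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'my-csv-bucket'},   # hypothetical bucket
            'object': {'key': 'uploads/data.csv'}  # hypothetical key
        }
    }]
}

print(extract_s3_objects(sample_event))  # [('my-csv-bucket', 'uploads/data.csv')]
```

Note that keys in these events arrive URL-encoded, which is exactly why file names with special characters trip the function up.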

Note: The code above only works in Python 2.7


Using AWS Lambda functions with the Salesforce Bulk API

One common task when integrating Salesforce with a customer's systems is importing data, either as a one-time task or regularly.

This can be done in several ways depending on the in-house technical level; the simplest might be the Import Wizard or the Data Loader. If you want to do it regularly in a batch fashion and are fortunate enough to have AWS infrastructure available, Lambda functions are an alternative.

Recently I did this as a prototype and I will share my findings here.

I will not go into details about AWS and Lambda. I used this tutorial to get started with Lambda functions, but most of it didn't concern the Salesforce parts, rather AWS specifics like IAM.

I found this Heroku project for using the Bulk API.

The full Python code looks like this:

from __future__ import print_function
from base64 import b64decode
import boto3
import uuid
import csv
import os
from salesforce_bulk import SalesforceBulk, CsvDictsAdapter
import logging
logger = logging.getLogger()

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
mapping_file_bucket = os.environ['MAPPING_FILE_BUCKET']
mapping_file_key = os.environ['MAPPING_FILE_KEY']

def bulk_upload(csv_path, mapping_file_path):
    with open(csv_path, mode='r') as infile:
        logger.info('Trying to login to SalesforceBulk')
        job = None
        try:
            bulk = SalesforceBulk(username=username, password=password, security_token=security_token)
            job = bulk.create_insert_job("Account", contentType='CSV')

            # Mapping file, sent to the job via the method added in my clone of the project
            mapping_file = open(mapping_file_path, 'rb')

            accounts = csv.DictReader(infile)
            csv_iter = CsvDictsAdapter(iter(accounts))
            batch = bulk.post_batch(job, csv_iter)
            bulk.wait_for_batch(job, batch)
            bulk.close_job(job)
            logger.info('Done. Accounts uploaded.')
        except Exception as e:
            if job:
                bulk.abort_job(job)
            raise e
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, download_path)

        # Mapping file
        mapping_file_path = '/tmp/{}{}'.format(uuid.uuid4(), mapping_file_key)
        s3_client.download_file(mapping_file_bucket, mapping_file_key, mapping_file_path)

        return bulk_upload(download_path, mapping_file_path)

Make sure to add the following environment variables in Lambda before executing:

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
MAPPING_FILE_BUCKET - the bucket in where to find the mapping file
MAPPING_FILE_KEY - the path to the mapping file in that bucket

I also added a method (in my own clone of the project, here) to provide the mapping file as part of the payload; I'll make sure to create a pull request for this later.

The nice thing about using the Bulk API is that you get monitoring directly in Salesforce: just open the bulk job monitoring page in Setup to see the status of your job(s).

I haven't added the S3 trigger listener yet, but it's the next part of the tutorial, so it shouldn't be a problem.


Using SHA-1 to generate passwords

This is probably the worst thing about the Internet: remembering passwords.

Until about a year ago I had the same password for almost every service; sometimes I spiced it up by adding a character at the start representing the service (F for Facebook, G for Gmail, and so on).

A colleague had a different approach: he used simple "passwords" which he then ran through SHA-1. He had developed a simple JavaScript snippet that performed the SHA-1 conversion.

I have changed his original script a bit to be able to generate different length passwords (11, 21 and 26 characters).

You can try it out yourself, just add the following url as a bookmark (works in all browsers, I think):


To use it, put a "-" in the password box and press the link. This gives you a glass pane with a password box; write your simple password (Facebook1234, for example) and choose the length, and the script will enter the generated password in the field where you put the "-".
If you choose Facebook1234 as your simple password, the 21-character SHA-1 will be fc1a17377c7ed19872037.
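The idea is easy to sketch in Python. Note that this is the general scheme, not the bookmarklet itself, so the exact truncation (and therefore the generated password) may differ from the value above:

```python
import hashlib

def derive_password(simple_password, length=21):
    """Hash the easy-to-remember password and keep the first `length`
    hex characters (the bookmarklet offers 11, 21 and 26)."""
    digest = hashlib.sha1(simple_password.encode('utf-8')).hexdigest()
    return digest[:length]

print(derive_password('Facebook1234'))      # 21-character derived password
print(derive_password('Facebook1234', 11))  # shorter variant
```

The point of the scheme is that only the weak, memorable password is typed; the service only ever sees the derived digest.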

The convert.js script is YUI-compressed; the original one is convert.js.original.
There is also a form-based version if you just want the password in cleartext:

Of course everything is on GitHub if you want to look at it and put it up for yourself. If you don't have any web hosting you could just put the scripts in the Public part of your Dropbox.

Polygon with holes in aggdraw

Drawing polygons with holes has haunted me for a while, and recently I had to do it in Python with PIL.

Googling the subject gives you a hint.

This code:

import os
import Image  # PIL, old-style import
import aggdraw

draw = aggdraw.Draw('RGB', (100, 100), 'white')

path = aggdraw.Path()
# Outer square (coordinates are illustrative)
path.moveto(10, 10)
path.lineto(10, 60)
path.lineto(60, 60)
path.lineto(60, 10)
path.close()
# Inner square: the hole, wound in the opposite direction
path.moveto(20, 20)
path.lineto(50, 20)
path.lineto(50, 50)
path.lineto(20, 50)
path.close()

pen = aggdraw.Brush("black")
draw.path((25, 25), path, pen, None)

img = Image.fromstring('RGB', (100, 100), draw.tostring())
p = os.path.join(os.path.dirname(__file__), 'box.png')
Gives the following image:
Polygon with hole created with aggdraw


Compiling scipy in 32 bit in a 64 bit environment (el5)

During the last two days I've been trying to compile an old product in 32-bit mode on a 64-bit Red Hat Enterprise Linux 5 environment, which should not be that hard.

Python itself is no problem:

CC="gcc -m32" ./configure

(got information from here)

And this approach works for almost every third-party package, except for SciPy.

SciPy contains a lot of FORTRAN code, and it wasn't obvious how to make it both build and link with the -m32 flag.

After a lot of trial and error this is what I used:

F90FLAGS="-m32" F77FLAGS="-m32" \
LDFLAGS="-g -Wall -shared -m32 -fPIC" \
$PYTHON config_fc --fcompiler=gnu95 install

There are probably other flags that work better, but these worked for me. I think LDFLAGS is what did it, since those are used when g77 links the FORTRAN code. The flags also worked fine for NumPy.

I will try to write more often but I have had a lot to do recently.

Google Maps polygons with holes

The last couple of weeks I have been experimenting with Google Maps, trying to draw filled polygons that look all right. I'm using matplotlib to make the polygons, and I've figured out that the output from contourf(...) works like a plotting routine: you first get a polygon that should be filled with the current level, and the following ones are holes in it.

Before, I just drew them all, colouring the holes with a lower colour. That forced me to sort the polygons by size, which worked OK but didn't look good.

In Google Maps v2 there was something called encoded polygons, but it seems they were removed in v3 (they were never supported by Chrome anyway).

Anyway, the correct way in v3 is to do it like this:

  var paths = [[
    new google.maps.LatLng(38.872886, -77.054720),
    new google.maps.LatLng(38.872602, -77.058046),
    new google.maps.LatLng(38.870080, -77.058604),
    new google.maps.LatLng(38.868894, -77.055664),
    new google.maps.LatLng(38.870598, -77.053346)
  ], [
    new google.maps.LatLng(38.871684, -77.056780),
    new google.maps.LatLng(38.871867, -77.055449),
    new google.maps.LatLng(38.870915, -77.054891),
    new google.maps.LatLng(38.870113, -77.055836),
    new google.maps.LatLng(38.870581, -77.057037)
  ]];

  function initialize() {
    var map = new google.maps.Map(document.getElementById("map"), {
      zoom: 16,
      center: new google.maps.LatLng(38.8714, -77.0556),
      mapTypeId: google.maps.MapTypeId.SATELLITE
    });

    var poly = new google.maps.Polygon({
      paths: paths,
      strokeWeight: 3,
      fillColor: '#55FF55',
      fillOpacity: 0.5
    });
    poly.setMap(map);
  }

My project is now online at and here’s a screenshot:

There is some trouble with the polygons when applying the b-splines, but I will try to take care of that in the near future.

I have briefly investigated Thrift and Protocol Buffers, but I'm still not sure I will gain much by switching from JSON. I also found something called BSON, which is binary JSON. I think the next step will be to add some more parameters; wind direction is probably the hardest, since I will have to draw the arrows myself.


Improving Google Maps polygons with b-splines

Google Maps is great: you get an extremely nice background map for free. I know there are alternatives (Bing, OpenLayers, etc.) out there, but since I'm running on Google App Engine it seems easier to go Google all the way.

I'm plotting polygons and polylines (that's what weather is about) and it works great, but my input data is kind of sparse, so the polygons look very rough.

To improve them I'm using b-splines. I found a very nice article here; I just changed the JavaScript so it works with lat/lon arrays and outputs an array of google.maps.LatLng.

function bspline(lats, lons) {
    var i, t, ax, ay, bx, by, cx, cy, dx, dy, lat, lon, points;
    points = [];
    // For every point
    for (i = 2; i < lats.length - 2; i++) {
        for (t = 0; t < 1; t += 0.2) {
            ax = (-lats[i - 2] + 3 * lats[i - 1] - 3 * lats[i] + lats[i + 1]) / 6;
            ay = (-lons[i - 2] + 3 * lons[i - 1] - 3 * lons[i] + lons[i + 1]) / 6;
            bx = (lats[i - 2] - 2 * lats[i - 1] + lats[i]) / 2;
            by = (lons[i - 2] - 2 * lons[i - 1] + lons[i]) / 2;
            cx = (-lats[i - 2] + lats[i]) / 2;
            cy = (-lons[i - 2] + lons[i]) / 2;
            dx = (lats[i - 2] + 4 * lats[i - 1] + lats[i]) / 6;
            dy = (lons[i - 2] + 4 * lons[i - 1] + lons[i]) / 6;
            lat = ax * Math.pow(t + 0.1, 3) + bx * Math.pow(t + 0.1, 2) + cx * (t + 0.1) + dx;
            lon = ay * Math.pow(t + 0.1, 3) + by * Math.pow(t + 0.1, 2) + cy * (t + 0.1) + dy;
            points.push(new google.maps.LatLng(lat, lon));
        }
    }
    return points;
}

There are a couple more things to do: the original arrays have to be extended by adding the first two elements at the back and the last two at the front, or else the polygon will look chopped.

Also the first point may have to be added at the end to close a polyline, a polygon will close itself.
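If you want to sanity-check the smoothing server-side, the loop ports directly to Python; this version returns plain `(lat, lon)` tuples instead of `google.maps.LatLng` objects:

```python
def bspline(lats, lons):
    """Direct port of the JavaScript smoothing loop above."""
    points = []
    for i in range(2, len(lats) - 2):
        t = 0.0
        while t < 1.0:
            ax = (-lats[i - 2] + 3 * lats[i - 1] - 3 * lats[i] + lats[i + 1]) / 6.0
            ay = (-lons[i - 2] + 3 * lons[i - 1] - 3 * lons[i] + lons[i + 1]) / 6.0
            bx = (lats[i - 2] - 2 * lats[i - 1] + lats[i]) / 2.0
            by = (lons[i - 2] - 2 * lons[i - 1] + lons[i]) / 2.0
            cx = (-lats[i - 2] + lats[i]) / 2.0
            cy = (-lons[i - 2] + lons[i]) / 2.0
            dx = (lats[i - 2] + 4 * lats[i - 1] + lats[i]) / 6.0
            dy = (lons[i - 2] + 4 * lons[i - 1] + lons[i]) / 6.0
            lat = ax * (t + 0.1) ** 3 + bx * (t + 0.1) ** 2 + cx * (t + 0.1) + dx
            lon = ay * (t + 0.1) ** 3 + by * (t + 0.1) ** 2 + cy * (t + 0.1) + dy
            points.append((lat, lon))
            t += 0.2
    return points

# Collinear, evenly spaced input stays on the line after smoothing
pts = bspline([0, 1, 2, 3, 4, 5], [0, 0, 0, 0, 0, 0])
```

A handy property for testing: collinear input comes back collinear, so any bulge you see on the map comes from your data, not the spline.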

This is what the picture looks like without b-splines:

And with b-splines:

There are some problems with incomplete polygons (polylines that enter and exit the area), but by skipping the first and last points they look OK.

The next step is to use Thrift for communication instead of JSON. I don't know if it will actually make any difference, but I have promised myself to learn either Google Protocol Buffers or Thrift.