I think everyone is familiar with the Auction insights report in Google Ads.

For years it looked rather poor and gave only a general idea of where you stand relative to competitors.
It is important to remember that the report is relative: it shows data only from the auctions you participated in. If your ad was paused for some reason and you did not compete for an impression, those auctions will not appear in your report.
However, some time ago Google added the ability to view Auction insights data in the standard report builder, and it turned out that there is much more data available there:

Now we have a lot of data, but a raw table like this is inconvenient to work with: you need to organize data storage, build reports on top of it, and keep them updated. I decided to do this in Google Data Studio.
Unfortunately, this report can't be pulled via the API, so I wrote a Google Apps Script that updates the data behind the report.
Data collection
I created a shared folder on Google Drive. Any traffic manager with access to the report and the folder can export the report's CSV file from the Google Ads interface and upload it to this shared folder. The script checks the folder once an hour, and if it finds a report file there, it loads it into Google BigQuery.
The file can't be uploaded as-is; it first needs to be converted to the correct format. Here is the file-processing function:
function main() {
  var date = Utilities.formatDate(new Date(), 'GMT', 'yyyyMMdd');
  // folder for reports
  var folderCSV = DriveApp.getFolderById('*********************************');
  var files = folderCSV.getFilesByType(MimeType.CSV);
  while (files.hasNext()) {
    var file = files.next();
    var filename = file.getName();
    // processed files are renamed and trashed; this filter skips them so we don't get confused
    if (filename.indexOf('__') == -1) {
      var data = file.getBlob().getDataAsString();
      var arr = Utilities.parseCsv(data);
      // cut the heading lines and log the number of data rows
      arr.splice(0, 3);
      Logger.log(arr.length);
      var uniqueArr = [];
      for (var i = 0; i < arr.length; i++) {
        uniqueArr.push([arr[i][1], 0]);
      }
      uniqueArr = unique(uniqueArr);
      // count the number of unique domains in the report
      Logger.log(uniqueArr.length);
      var filterArr = [];
      for (var d = 0; d < uniqueArr.length; d++) {
        var line = uniqueArr[d];
        for (var l = 0; l < arr.length; l++) {
          if (uniqueArr[d][0] == arr[l][1]) {
            line[1]++;
          }
        }
        // only domains that appear in the report at least 100 times go to BigQuery
        if (line[1] >= 100) {
          filterArr.push(line[0]);
        }
      }
      var filteredArr = [];
      for (var f = 0; f < filterArr.length; f++) {
        for (var l = 0; l < arr.length; l++) {
          if (filterArr[f] == arr[l][1]) {
            try {
              // normalize each row and push it into the output array
              arr[l][2] = Utilities.formatDate(new Date(arr[l][2]), "GMT", "yyyy-MM-dd'T'HH:mm:ss'Z'");
              arr[l][3] = arr[l][3].replace(/\%/i, '').replace(/<10/i, '1');
              arr[l][4] = arr[l][4].replace(/\%/i, '');
              arr[l][5] = arr[l][5].replace(/\%/i, '');
              arr[l][6] = arr[l][6].trim().replace(/\-\-/i, '0');
              filteredArr.push(arr[l].join(','));
            } catch (e) {
              Logger.log(e);
            }
          }
        }
      }
      // assemble a csv file from the array for uploading to BigQuery
      var newCSVstring = filteredArr.join('\r\n');
      var newFile = folderCSV.createFile('__AU_' + date + '.csv', newCSVstring);
      if (newFile) {
        try {
          file.setName('___DEL__' + date + '.csv');
          file.setTrashed(true);
        } catch (e) {
          Logger.log(e);
        }
        // trash the original file and send the formatted, cleaned file to BigQuery
        var fileId = newFile.getId();
        loadCsv(fileId);
      }
    } else {
      try {
        file.setName('___DEL__' + date + '.csv');
        file.setTrashed(true);
      } catch (e) {
        Logger.log(e);
      }
    }
    // process at most one file per run; the hourly trigger picks up the rest
    break;
  }
}
function unique(arr) {
  // remove duplicate [domain, counter] pairs (array elements are compared as string keys)
  var tmp = {};
  return arr.filter(function (a) {
    return a in tmp ? 0 : tmp[a] = 1;
  });
}
For uploading to BigQuery, we'll create a separate function. In it, you need to specify a pre-created project and dataset in which the script will create the report tables.
function loadCsv(csvFileId) {
  var projectId = '***********'; // project name
  var datasetId = '***********'; // dataset name
  var date = Utilities.formatDate(new Date(), 'GMT', 'yyyyMMdd');
  var tableId = 'AU_IS_' + date; // table name
  var table = {
    tableReference: {
      projectId: projectId,
      datasetId: datasetId,
      tableId: tableId
    },
    schema: {
      fields: [
        { name: 'account', type: 'STRING' }, // if you do not have an MCC report, you will not need this field
        { name: 'domain', type: 'STRING' },
        { name: 'keyword', type: 'STRING' },
        { name: 'date', type: 'TIMESTAMP' },
        { name: 'search_impr_share', type: 'FLOAT' },
        { name: 'top_of_page_rate', type: 'FLOAT' },
        { name: 'abs_top_of_page_rate', type: 'FLOAT' },
        { name: 'position_above_rate', type: 'FLOAT' }
      ]
    }
  };
  table = BigQuery.Tables.insert(table, projectId, datasetId);
  Logger.log('Table created: %s', table.id);
  var file = DriveApp.getFileById(csvFileId);
  var data = file.getBlob().setContentType('application/octet-stream');
  // create a job to load the data
  var job = {
    configuration: {
      load: {
        destinationTable: {
          projectId: projectId,
          datasetId: datasetId,
          tableId: tableId
        },
        // the generated csv has no header row, so there is nothing to skip
        skipLeadingRows: 0
      }
    }
  };
  job = BigQuery.Jobs.insert(job, projectId, data);
  Logger.log('Load job started. Check on its status here: https://bigquery.cloud.google.com/jobs/%s', projectId);
}
That's all: set the main() function to run every hour with a time-driven trigger, and data from the folder will automatically end up in BigQuery.
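If you prefer to install the trigger from code rather than through the Apps Script UI, a one-off helper like the following can do it (createHourlyTrigger is an illustrative name; run it once manually and it registers the hourly schedule):

```javascript
// One-off helper: run once to install an hourly time-driven trigger for main().
// Sketch only; the same trigger can be created through the Apps Script triggers UI.
function createHourlyTrigger() {
  ScriptApp.newTrigger('main') // name of the function the trigger will call
    .timeBased()
    .everyHours(1)
    .create();
}
```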
Data visualization
I decided to visualize the report in Data Studio.
First, we need to create a data source: in the list of suggested connectors, select BigQuery, then create a custom query:
SELECT * FROM `project-name.dataset_name.AU_IS_*`
This query collects data for the report from all matching tables, so newly added tables are picked up on the fly.
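Because each table name ends with the load date (AU_IS_yyyyMMdd), the same wildcard can also be restricted to a date range via the _TABLE_SUFFIX pseudo-column. A sketch, using the placeholder project and dataset names from above:

```sql
-- Scan only the tables loaded in a given period (table names end in yyyyMMdd)
SELECT *
FROM `project-name.dataset_name.AU_IS_*`
WHERE _TABLE_SUFFIX BETWEEN '20200101' AND '20200131'
```

This keeps the scanned data (and the BigQuery bill) down when the report only needs a recent window.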
Next, set up the fields:

For the fields with competitor metrics, set the aggregation to Average.
And building the report display:

The most convenient combination is a smoothed line chart with weekly data granularity.
Additionally, you can easily create a report in the «competitors by keyword» format:

Here you can get a clear picture of the competition for a keyword (or for all phrases containing a given word) over the period you are interested in.
Or build the «competitor's keywords» report, which shows how far a competitor has invaded your territory and how hard they are trying to beat you:

This, of course, does not reveal the competitor's entire semantic core, only the part where you intersected in your auctions. Still, based on the composition of keywords and their positions, you can often draw quite useful conclusions about the bidding strategy of the domain you are interested in.
How else you can use this data is covered in a separate article.