Batch Scheduling Problem
Problem
In the Batch Scheduling Problem, a set of tasks must be processed in batches on a set of available resources. Tasks within the same batch are processed together. Consequently, they must have the same duration, the same type, and the same resource type. The resources are disjunctive and can only process one batch of tasks at a time. The resources also have capacities: the number of tasks within each batch on a given resource must not exceed the resource’s capacity. There are precedence relations between the tasks: each task must be processed in an earlier batch than any of its successors. The goal is to find a schedule that minimizes the makespan: the time when all tasks have been processed.
Principles learned
- Use set decision variables to model the batches
- Use interval decision variables to model the time range of each batch
- Use the distinct operator and a lambda function to ensure all tasks within the same batch have the same type
Data
The format of the data files is as follows (a small illustrative example follows the list):
- First line: number of tasks, number of resources
- Second line: for each resource, the maximum capacity
- Next lines: for each task:
- its type
- the resource it requires
- its duration
- its number of successors
- the IDs of each of its successors
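For illustration only (this is not one of the provided .nfb instances), a tiny instance with 3 tasks and 1 resource of capacity 2, written in the field order above as read by the programs below (type, resource, duration, number of successors, successor IDs), could look like this:

3 1
2
0 0 5 1 2
0 0 5 1 2
1 0 4 0

Tasks 0 and 1 share the same type, duration, and resource, so they can be processed together in one batch of size 2; task 2, their common successor, is processed afterwards, giving a makespan of 9.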
Program
The Hexaly Model for the Batch Scheduling Problem uses set decision variables to model the batches scheduled on each resource. The elements in each set represent the indices of the tasks within the corresponding batch. Using the ‘partition’ operator on each resource’s collection of sets, we ensure that each task is assigned to exactly one batch.
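As a minimal sketch of this step with the Python API (mirroring the full Python program shown further down; model, nb_resources, and nb_tasks_per_resource are assumed to be defined as in that program):

# One set variable per potential batch on each resource; each set can contain
# the indices 0 .. nb_tasks_per_resource[r] - 1 of the tasks assigned to resource r
batch_content = [[model.set(nb_tasks_per_resource[r])
                  for _ in range(nb_tasks_per_resource[r])]
                 for r in range(nb_resources)]
batch_content_arrays = [model.array(batch_content[r]) for r in range(nb_resources)]
# Each task index of the resource belongs to exactly one batch
for r in range(nb_resources):
    model.constraint(model.partition(batch_content_arrays[r]))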
All tasks within the same batch must have the same type. Using the ‘distinct’ operator and a lambda function, we compute the unordered set of distinct values among the task types in each batch. We can then use the ‘count’ operator to constrain this set to contain at most one element, ensuring that all tasks in the batch share the same type. We also use the ‘count’ operator to ensure that the number of tasks in each batch does not exceed the associated resource’s capacity.
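A sketch of these two constraints in the same Python style (capacity and types_in_resource are read from the instance, as in the full program below):

types_in_resource_array = model.array(types_in_resource)
for r in range(nb_resources):
    # Lambda mapping a task index of resource r to its type
    type_of = model.lambda_function(lambda i: types_in_resource_array[r][i])
    for batch in batch_content[r]:
        # At most one distinct type per batch (empty batches are allowed)
        model.constraint(model.count(model.distinct(batch, type_of)) <= 1)
        # The batch size cannot exceed the resource capacity
        model.constraint(model.count(batch) <= capacity[r])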
In addition to sets, we also use interval decision variables to model each batch’s time range. To ensure that the batches on each resource do not overlap, we order the resources’ batch intervals with precedence constraints.
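Continuing the Python sketch (time_horizon is the sum of all durations, computed when reading the instance):

# One interval variable per potential batch on each resource
batch_interval = [[model.interval(0, time_horizon)
                   for _ in range(nb_tasks_per_resource[r])]
                  for r in range(nb_resources)]
batch_interval_arrays = [model.array(batch_interval[r]) for r in range(nb_resources)]
# Ordering consecutive batch intervals prevents any overlap on the resource
for r in range(nb_resources):
    for b in range(1, nb_tasks_per_resource[r]):
        model.constraint(batch_interval[r][b - 1] < batch_interval[r][b])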
Thanks to the ‘find’ operator, we can retrieve the batch interval containing each task. We can then constrain the length of this interval to be equal to the duration of the task, and enforce the precedence constraints between the tasks.
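In the sketch, assuming resources, task_index_in_resource, duration, and successors have been read as in the full program:

task_interval = [None] * nb_tasks
for t in range(nb_tasks):
    r = resources[t]
    # Index of the batch containing task t on its resource
    b = model.find(batch_content_arrays[r], task_index_in_resource[t])
    # The task inherits the time range of its batch
    task_interval[t] = batch_interval_arrays[r][b]
    # The batch interval must be as long as the task duration
    model.constraint(model.length(task_interval[t]) == duration[t])
# A task's batch must come before the batches of its successors
for t in range(nb_tasks):
    for s in successors[t]:
        model.constraint(task_interval[t] < task_interval[s])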
The objective consists in minimizing the makespan, which is the time when all the tasks have ended.
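In the Python sketch, this reads:

# Makespan: latest end among all task intervals
makespan = model.max([model.end(task_interval[t]) for t in range(nb_tasks)])
model.minimize(makespan)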
- Execution
hexaly batch_scheduling.hxm inFileName=instances/5_2-000.nfb [hxTimeLimit=] [solFileName=]
use io;
/* Read instance data. */
function input() {
local usage = "Usage: hexaly batch_scheduling.hxm "
+ "inFileName=inputFile [solFileName=outputFile] [hxTimeLimit=timeLimit]";
if (inFileName == nil) throw usage;
local inFile = io.openRead(inFileName);
// Number of tasks
nbTasks = inFile.readInt();
// Number of resources
nbResources = inFile.readInt();
// Maximum capacity of each resource
capacity[r in 0...nbResources] = inFile.readInt();
// Number of tasks assigned to each resource
nbTasksPerResource[r in 0...nbResources] = 0;
// Types of tasks assigned to each resource
typesInResource[r in 0...nbResources] = {};
// Tasks assigned to each resource
tasksInResource[r in 0...nbResources] = {};
for [i in 0...nbTasks] {
// Type of task i
type[i] = inFile.readInt();
// Resource required for task i
resource[i] = inFile.readInt();
// Index of task i on resource[i]
taskIndexInResource[i] = nbTasksPerResource[resource[i]];
// Map from index of task i on resource[i] to task i type
typesInResource[resource[i]][taskIndexInResource[i]] = type[i];
// Map from index of task i on resource[i] to task i id
tasksInResource[resource[i]][taskIndexInResource[i]] = i;
// Increment number of tasks required by this resource
nbTasksPerResource[resource[i]] = nbTasksPerResource[resource[i]] + 1;
// Duration of task i
duration[i] = inFile.readInt();
// Number of tasks that must succeed task i
nbSuccessors[i] = inFile.readInt();
// Task id that must succeed task i
successors[i][s in 0...nbSuccessors[i]] = inFile.readInt();
}
// Longest possible time horizon
timeHorizon = sum[t in 0...nbTasks](duration[t]);
inFile.close();
}
/* Declare the optimization model */
function model() {
// For each resource, the contents of each batch of tasks performed
batchContent[r in 0...nbResources][b in 0...nbTasksPerResource[r]] <- set(nbTasksPerResource[r]);
// All tasks are assigned to a batch
for [r in 0...nbResources] {
constraint partition(batchContent[r]);
}
// Each batch must consist of tasks with the same type
for [r in 0...nbResources][batch in batchContent[r]] {
constraint count(distinct(batch, i => typesInResource[r][i])) <= 1;
}
// Each batch cannot exceed the maximum capacity of the resource
for [r in 0...nbResources][batch in batchContent[r]] {
constraint count(batch) <= capacity[r];
}
// Interval decisions: time range of each batch of tasks
batchInterval[r in 0...nbResources][n in 0...nbTasksPerResource[r]] <- interval(0, timeHorizon);
// Non-overlap of batch intervals on the same resource
for [r in 0...nbResources][b in 1...nbTasksPerResource[r]] {
constraint batchInterval[r][b-1] < batchInterval[r][b];
}
// Interval decisions: time range of each task
for [t in 0...nbTasks] {
// Retrieve the batch index and resource for this task
local r = resource[t];
local b <- find(batchContent[r], taskIndexInResource[t]);
        // Task interval associated with task t
taskInterval[t] <- batchInterval[r][b];
}
// Task durations
for [t in 0...nbTasks] {
constraint length(taskInterval[t]) == duration[t];
}
// Precedence constraints between tasks
for [t in 0...nbTasks][s in successors[t]] {
constraint taskInterval[t] < taskInterval[s];
}
// Makespan: end of the last task
makespan <- max[t in 0...nbTasks](end(taskInterval[t]));
minimize makespan;
}
/* Parameterize the solver */
function param() {
if (hxTimeLimit == nil) hxTimeLimit = 15;
}
/* Write the solution in a file with the following format:
 * - 1st line: makespan
 * - then, for each machine: the machine number, followed by one line per task
 *   on that machine with the task id, start, and end of its interval */
function output() {
    if (solFileName != nil) {
        outFile = io.openWrite(solFileName);
outFile.println(makespan.value);
for [r in 0...nbResources] {
outFile.println(r);
for [b in 0...nbTasksPerResource[r]] {
local t = tasksInResource[r][b];
outFile.println(t + " " + taskInterval[t].value.start + " " + taskInterval[t].value.end);
}
}
outFile.close();
}
}
- Execution (Windows)
set PYTHONPATH=%HX_HOME%\bin\python
python batch_scheduling.py instances\5_2-000.nfb
- Execution (Linux)
export PYTHONPATH=/opt/hexaly_13_0/bin/python
python batch_scheduling.py instances/5_2-000.nfb
import hexaly.optimizer
import sys
def read_instance(filename):
    # Read the instance data
with open(filename, 'r') as f:
lines = f.readlines()
first_line = lines[0].split()
# Number of tasks
nb_tasks = int(first_line[0])
# Number of resources
nb_resources = int(first_line[1])
second_line = lines[1].split()
# Capacity of each resource
capacity = [int(l) for l in second_line]
# Remaining lines contain task information at each row
nb_tasks_per_resource = [0 for _ in range(nb_resources)]
types_in_resource = [[] for _ in range(nb_resources)]
tasks_in_resource = [[] for _ in range(nb_resources)]
task_index_in_resource = []
types, resources, duration, nb_successors = [], [], [], []
successors = [[] for _ in range(nb_tasks)]
for i in range(nb_tasks):
# Extract dataset line related to this task
task_line = i + 2
task_information = lines[task_line].split()
# Type of task i
types.append(int(task_information[0]))
# Resource required for task i
resources.append(int(task_information[1]))
# Index of task i on resource[i]
task_index_in_resource.append(nb_tasks_per_resource[resources[i]])
            # Map from index of task i on resource[i] to the type of task i
types_in_resource[resources[i]].append(types[i])
            # Map from index of task i on resource[i] to task i
tasks_in_resource[resources[i]].append(i)
# Increment number of tasks required by this resource
nb_tasks_per_resource[resources[i]] += 1
# Task duration
duration.append(int(task_information[2]))
# Number of successors of this task
nb_successors.append(int(task_information[3]))
# Tasks that must succeed current task
for succeeding_task in task_information[4:]:
successors[i].append(int(succeeding_task))
# Trivial time horizon
time_horizon = sum(duration[t] for t in range(nb_tasks))
return nb_tasks, nb_resources, capacity, types, resources, duration, \
nb_successors, successors, nb_tasks_per_resource, \
task_index_in_resource, types_in_resource, tasks_in_resource, \
nb_tasks_per_resource, time_horizon
def main(instance_file, output_file, time_limit):
nb_tasks, nb_resources, capacity, types, resources, duration, \
nb_successors, successors, nb_tasks_per_resource, \
task_index_in_resource, types_in_resource, tasks_in_resource, \
nb_tasks_per_resource, time_horizon \
= read_instance(instance_file)
with hexaly.optimizer.HexalyOptimizer() as optimizer:
#
# Declare the optimization model
#
model = optimizer.model
# For each resource, the contents of each batch of tasks performed
batch_content = [[model.set(nb_tasks_per_resource[r])
for b in range(nb_tasks_per_resource[r])]
for r in range(nb_resources)]
# Create HexalyOptimizer arrays in order to be able to access them with "at" operators
batch_content_arrays = [model.array(batch_content[r]) for r in range(nb_resources)]
# All tasks are assigned to a batch
for r in range(nb_resources):
model.constraint(model.partition(batch_content_arrays[r]))
# Each batch must consist of tasks with the same type
types_in_resource_array = model.array(types_in_resource)
for r in range(nb_resources):
resource_type_lambda = model.lambda_function(lambda i: types_in_resource_array[r][i])
for batch in batch_content[r]:
model.constraint(model.count( model.distinct( batch, resource_type_lambda ) ) <= 1)
# Each batch cannot exceed the maximum capacity of the resource
for r in range(nb_resources):
for batch in batch_content[r]:
model.constraint(model.count(batch) <= capacity[r])
# Interval decisions: time range of each batch of tasks
batch_interval = [[model.interval(0, time_horizon)
for _ in range(nb_tasks_per_resource[r])]
for r in range(nb_resources)]
batch_interval_arrays = [model.array(batch_interval[r]) for r in range(nb_resources)]
# Non-overlap of batch intervals on the same resource
for r in range(nb_resources):
for b in range(1, nb_tasks_per_resource[r]):
model.constraint(batch_interval[r][b-1] < batch_interval[r][b])
# Interval decisions: time range of each task
task_interval = [None for _ in range(nb_tasks)]
for t in range(nb_tasks):
# Retrieve the batch index and resource for this task
r = resources[t]
b = model.find( batch_content_arrays[r], task_index_in_resource[t] )
            # Task interval associated with task t
task_interval[t] = batch_interval_arrays[r][b]
# Task durations
for t in range(nb_tasks):
model.constraint(model.length(task_interval[t]) == duration[t])
# Precedence constraints between tasks
for t in range(nb_tasks):
for s in successors[t]:
model.constraint( task_interval[t] < task_interval[s])
# Makespan: end of the last task
        makespan = model.max([model.end(task_interval[t]) for t in range(nb_tasks)])
model.minimize(makespan)
model.close()
# Parametrize the optimizer
optimizer.param.time_limit = time_limit
optimizer.solve()
#
        # Write the solution in a file with the following format:
        # - 1st line: makespan
        # - then, for each machine: the machine number, followed by one line per
        #   task on that machine with the task id, start, and end of its interval
if output_file is not None:
with open(output_file, 'w') as f:
f.write(str(makespan.value) + "\n")
for r in range(nb_resources):
f.write(str(r) + "\n")
for b in range(nb_tasks_per_resource[r]):
t = tasks_in_resource[r][b]
line = str(t) + " " + str(task_interval[t].value.start()) + " " + str(task_interval[t].value.end())
f.write(line + "\n")
print("Solution written in file ", output_file)
if __name__=="__main__":
if len(sys.argv) < 2:
print(
"Usage: python batch_scheduling.py instance_file [output_file] [time_limit]")
sys.exit(1)
instance_file = sys.argv[1]
output_file = sys.argv[2] if len(sys.argv) >= 3 else None
time_limit = int(sys.argv[3]) if len(sys.argv) >= 4 else 60
main(instance_file, output_file, time_limit)
- Compilation / Execution (Windows)
cl /EHsc batch_scheduling.cpp -I%HX_HOME%\include /link %HX_HOME%\bin\hexaly130.lib
batch_scheduling instances\5_2-000.nfb
- Compilation / Execution (Linux)
g++ batch_scheduling.cpp -I/opt/hexaly_13_0/include -lhexaly130 -lpthread -o batch_scheduling
./batch_scheduling instances/5_2-000.nfb
#include "optimizer/hexalyoptimizer.h"
#include <algorithm>
#include <fstream>
#include <iostream>
#include <limits>
#include <numeric>
#include <vector>
using namespace hexaly;
using namespace std;
class BatchScheduling {
private:
// Hexaly Optimizer
HexalyOptimizer optimizer;
// Decision variables: collection of sets (representing batches)
vector<vector<HxExpression>> batchContent;
// Decision variables: intervals of batches on a resource
vector<vector<HxExpression>> batchInterval;
// Decision variables: interval of a task
vector<HxExpression> taskInterval;
    // Objective = minimize the makespan: end of the last task
HxExpression makespan;
public:
BatchScheduling() {}
// Number of tasks
int nbTasks;
// Number of resources
int nbResources;
// Capacity of each resource
vector<int> capacity;
// Number of tasks assigned to each resource
vector<int> nbTasksPerResource;
// Types of tasks assigned to each resource
vector<vector<int>> typesInResources;
// Tasks assigned to each resource
vector<vector<int>> tasksInResource;
// Index of task i on resource[i]
vector<int> taskIndexInResource;
// Type of task i
vector<int> type;
// Resource required for task i
vector<int> resource;
// Duration of task i
vector<int> duration;
// Number of tasks that must succeed task i
vector<int> nbSuccessors;
// Task ids that must succeed task i
vector<vector<int>> successors;
// Longest possible time horizon
int timeHorizon{0};
// Read input file
void readInstance(const string& fileName) {
ifstream infile;
infile.exceptions(ifstream::failbit | ifstream::badbit);
infile.open(fileName.c_str());
// first line has number of tasks, number of resources
infile >> nbTasks;
infile >> nbResources;
// second line has the capacity of each resource
capacity.resize(nbResources);
for (int i = 0; i < nbResources; i++) {
infile >> capacity[i];
}
        // Initialize
nbTasksPerResource.resize(nbResources);
for (int j = 0; j < nbResources; j++) {
typesInResources.push_back(vector<int>());
tasksInResource.push_back(vector<int>());
nbTasksPerResource[j] = 0;
}
type.resize(nbTasks);
resource.resize(nbTasks);
duration.resize(nbTasks);
nbSuccessors.resize(nbTasks);
taskIndexInResource.resize(nbTasks);
// Task information: [type, machine, duration, nbSuccessors, [successors]]
for (int i = 0; i < nbTasks; i++) {
infile >> type[i];
infile >> resource[i];
infile >> duration[i];
infile >> nbSuccessors[i];
            // Collect the tasks that must succeed task i
            successors.push_back(vector<int>(nbSuccessors[i]));
            for (int j = 0; j < nbSuccessors[i]; j++) {
                infile >> successors[i][j];
            }
// Index of task i on resource[i]
taskIndexInResource[i] = nbTasksPerResource[resource[i]];
            // Map from index of task i on resource[i] to the type of task i
            typesInResources[resource[i]].push_back(type[i]);
            // Map from index of task i on resource[i] to task i
            tasksInResource[resource[i]].push_back(i);
            // Increment the number of tasks required by this resource
            nbTasksPerResource[resource[i]] += 1;
// Add task time to the overall trivial time horizon
timeHorizon += duration[i];
}
infile.close();
}
void solve(int timeLimit) {
// Declare the optimization model
HxModel model = optimizer.getModel();
// For each resource, the contents of each batch of tasks performed
batchContent.resize(nbResources);
for (unsigned int r = 0; r < nbResources; r++) {
batchContent[r].resize(nbTasksPerResource[r]);
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
batchContent[r][b] = model.setVar(nbTasksPerResource[r]);
}
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
vector<HxExpression> batchContentArray = vector<HxExpression>(nbResources);
for (unsigned int r = 0; r < nbResources; r++) {
batchContentArray[r] = model.array();
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
batchContentArray[r].addOperand(batchContent[r][b]);
}
}
// All tasks are assigned to a batch
for (unsigned int r = 0; r < nbResources; r++) {
model.constraint(model.partition(batchContentArray[r]));
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
vector<HxExpression> typesInResourcesArray = vector<HxExpression>(nbResources);
for (unsigned int r = 0; r < nbResources; r++) {
typesInResourcesArray[r] = model.array();
for (unsigned int i = 0; i < nbTasksPerResource[r]; i++) {
typesInResourcesArray[r].addOperand(typesInResources[r][i]);
}
}
// Each batch must consist of tasks with the same type
for (unsigned int r = 0; r < nbResources; r++) {
HxExpression resourceTypeLambda = model.createLambdaFunction([&](HxExpression i) {
return typesInResourcesArray[r][i];
});
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
HxExpression batch = batchContent[r][b];
model.constraint(model.count(model.distinct(batch, resourceTypeLambda)) <= 1);
}
}
// Each batch cannot exceed the maximum capacity of the resource
for (unsigned int r = 0; r < nbResources; r++) {
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
model.constraint(model.count(batchContent[r][b]) <= capacity[r]);
}
}
// Interval decisions: time range of each batch of tasks
batchInterval.resize(nbResources);
for (unsigned int r = 0; r < nbResources; r++) {
batchInterval[r].resize(nbTasksPerResource[r]);
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
batchInterval[r][b] = model.intervalVar(0, timeHorizon);
}
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
vector<HxExpression> batchIntervalArray = vector<HxExpression>(nbResources);
for (unsigned int r = 0; r < nbResources; r++) {
batchIntervalArray[r] = model.array();
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
batchIntervalArray[r].addOperand(batchInterval[r][b]);
}
}
// Non-overlap of batch intervals on the same resource
for (unsigned int r = 0; r < nbResources; r++) {
for (unsigned int b = 1; b < nbTasksPerResource[r]; b++) {
model.constraint(batchInterval[r][b-1] < batchInterval[r][b]);
}
}
// Interval decisions: time range of each task
taskInterval.resize(nbTasks);
for (unsigned int t = 0; t < nbTasks; t++) {
// Retrieve the batch index and resource for this task
int r = resource[t];
HxExpression b = model.find(batchContentArray[r], taskIndexInResource[t]);
// Task interval associated with task t
taskInterval[t] = batchIntervalArray[r][b];
}
// Task durations
for (unsigned int t = 0; t < nbTasks; t++) {
model.constraint(model.length(taskInterval[t]) == duration[t]);
}
// Precedence constraints between tasks
for (unsigned int t = 0; t < nbTasks; t++) {
for (unsigned int s = 0; s < nbSuccessors[t]; s++) {
model.constraint(taskInterval[t] < taskInterval[successors[t][s]]);
}
}
// Makespan: end of the last task
makespan = model.max();
for (unsigned int t = 0; t < nbTasks; t++) {
makespan.addOperand(model.end(taskInterval[t]));
}
model.minimize(makespan);
model.close();
// Parameterize the optimizer
optimizer.getParam().setTimeLimit(timeLimit);
optimizer.solve();
}
    /*
     * Write the solution in a file with the following format:
     * - 1st line: makespan
     * - then, for each machine: the machine number, followed by one line per
     *   task on that machine with the task id, start, and end of its interval
     */
void writeSolution(const string& fileName) {
ofstream outfile;
outfile.exceptions(ofstream::failbit | ofstream::badbit);
outfile.open(fileName.c_str());
outfile << makespan.getValue() << endl;
for (unsigned int r = 0; r < nbResources; r++) {
outfile << r << endl;
for (unsigned int b = 0; b < nbTasksPerResource[r]; b++) {
int t = tasksInResource[r][b];
int start = taskInterval[t].getIntervalValue().getStart();
int end = taskInterval[t].getIntervalValue().getEnd();
outfile << t << " " << start << " " << end << endl;
}
}
cout << "Solution written in file " << fileName << endl;
outfile.close();
}
};
int main(int argc, char** argv) {
if (argc < 2) {
cout << "Usage: batch_scheduling instanceFile [outputFile] [timeLimit]" << endl;
exit(1);
}
const char* instanceFile = argv[1];
const char* outputFile = argc > 2 ? argv[2] : NULL;
const char* strTimeLimit = argc > 3 ? argv[3] : "60";
BatchScheduling model;
try {
model.readInstance(instanceFile);
const int timeLimit = atoi(strTimeLimit);
model.solve(timeLimit);
if (outputFile != NULL)
model.writeSolution(outputFile);
return 0;
} catch (const exception& e) {
cerr << "An error occurred: " << e.what() << endl;
return 1;
}
}
- Compilation / Execution (Windows)
copy %HX_HOME%\bin\Hexaly.NET.dll .
csc BatchScheduling.cs /reference:Hexaly.NET.dll
BatchScheduling instances\5_2-000.nfb
using System;
using System.IO;
using Hexaly.Optimizer;
public class BatchScheduling : IDisposable
{
// Number of tasks
private int nbTasks;
// Number of resources
private int nbResources;
// Capacity of each resource
private int[] capacity;
// Number of tasks assigned to each resource
private int[] nbTasksPerResource;
// Types of tasks assigned to each resource
private int[][] typesInResources;
// Tasks assigned to each resource
private int[][] tasksInResource;
// Index of task i on resource[i]
private int[] taskIndexInResource;
// Type of task i
private int[] type;
// Resource required for task i
private int[] resource;
// Duration of task i
private int[] duration;
// Number of tasks that must succeed task i
private int[] nbSuccessors;
// Task names that must succeed task i
private int[][] successors;
// Longest possible time horizon
private int timeHorizon;
// Hexaly Optimizer
private HexalyOptimizer optimizer;
// Decision variables: collection of sets (representing batches)
private HxExpression[][] batchContent;
// Decision variables: intervals of batches on a resource
private HxExpression[][] batchInterval;
// Decision variables: interval of a task
private HxExpression[] taskInterval;
    // Objective = minimize the makespan: end of the last task
private HxExpression makespan;
public BatchScheduling(string fileName)
{
optimizer = new HexalyOptimizer();
}
public static string[] ReadNextLine(StreamReader input) {
return input.ReadLine().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
}
public void ReadInstance(string fileName)
{
using (StreamReader input = new StreamReader(fileName))
{
string[] splitted = ReadNextLine(input);
// first line has number of tasks, number of resources
nbTasks = int.Parse(splitted[0]);
nbResources = int.Parse(splitted[1]);
// second line has the capacity of each resource
capacity = new int[nbResources];
splitted = ReadNextLine(input);
for (int r = 0; r < nbResources; ++r) {
capacity[r] = int.Parse(splitted[r]);
}
// initialize
type = new int[nbTasks];
resource = new int[nbTasks];
duration = new int[nbTasks];
nbSuccessors = new int[nbTasks];
successors = new int[nbTasks][];
taskIndexInResource = new int[nbTasks];
typesInResources = new int[nbResources][];
tasksInResource = new int[nbResources][];
nbTasksPerResource = new int[nbResources];
for (int j = 0; j < nbResources; ++j)
nbTasksPerResource[j] = 0;
// Task information: [type, machine, duration, nbSuccessors, [successors]]
for (int i = 0; i < nbTasks; ++i)
{
splitted = ReadNextLine(input);
type[i] = int.Parse(splitted[0]);
resource[i] = int.Parse(splitted[1]);
duration[i] = int.Parse(splitted[2]);
nbSuccessors[i] = int.Parse(splitted[3]);
                // Collect the tasks that must succeed task i
successors[i] = new int[nbSuccessors[i]];
for (int j = 0; j < nbSuccessors[i]; j++) {
successors[i][j] = int.Parse(splitted[4+j]);
}
// Index of task i on resource[i]
taskIndexInResource[i] = nbTasksPerResource[resource[i]];
                // Increment the number of tasks required by this resource
nbTasksPerResource[resource[i]] += 1;
// Add task time to the overall trivial time horizon
timeHorizon += duration[i];
}
// Mapping task list to resource task lists
int[] temp_nbTasksPerResource = new int[nbResources];
for (int r = 0; r < nbResources; ++r)
{
typesInResources[r] = new int[nbTasksPerResource[r]];
tasksInResource[r] = new int[nbTasksPerResource[r]];
temp_nbTasksPerResource[r] = 0;
}
for (int i = 0; i < nbTasks; ++i) {
int r = resource[i];
int index = temp_nbTasksPerResource[r];
                // Map from index of task i on resource[i] to the type of task i
typesInResources[r][index] = type[i];
                // Map from index of task i on resource[i] to task i
tasksInResource[r][index] = i;
// increment
temp_nbTasksPerResource[r] += 1;
}
}
}
public void Dispose()
{
if (optimizer != null)
optimizer.Dispose();
}
public void Solve(int timeLimit)
{
// Declare the optimization model
HxModel model = optimizer.GetModel();
// For each resource, the contents of each batch of tasks performed
batchContent = new HxExpression[nbResources][];
for (int r = 0; r < nbResources; ++r)
{
batchContent[r] = new HxExpression[nbTasksPerResource[r]];
for (int b = 0; b < nbTasksPerResource[r]; ++b)
batchContent[r][b] = model.Set(nbTasksPerResource[r]);
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] batchContentArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
batchContentArray[r] = model.Array(batchContent[r]);
// All tasks are assigned to a batch
for (int r = 0; r < nbResources; ++r)
model.Constraint( model.Partition(batchContentArray[r]) );
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] typesInResourcesArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
typesInResourcesArray[r] = model.Array(typesInResources[r]);
// Each batch must consist of tasks with the same type
for (int r = 0; r < nbResources; ++r) {
HxExpression resourceTypeLambda = model.LambdaFunction( i =>
{
return typesInResourcesArray[r][i];
});
for (int b = 0; b < nbTasksPerResource[r]; ++b)
model.Constraint(model.Count(
model.Distinct(batchContent[r][b], resourceTypeLambda)) <= 1);
}
// Each batch cannot exceed the maximum capacity of the resource
for (int r = 0; r < nbResources; ++r)
{
for (int b = 0; b < nbTasksPerResource[r]; ++b)
model.Constraint(model.Count(batchContent[r][b]) <= capacity[r]);
}
// Interval decisions: time range of each batch of tasks
batchInterval = new HxExpression[nbResources][];
for (int r = 0; r < nbResources; ++r)
{
batchInterval[r] = new HxExpression[nbTasksPerResource[r]];
for (int b = 0; b < nbTasksPerResource[r]; ++b)
batchInterval[r][b] = model.Interval(0, timeHorizon);
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] batchIntervalArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
batchIntervalArray[r] = model.Array(batchInterval[r]);
// Non-overlap of batch intervals on the same resource
for (int r = 0; r < nbResources; ++r)
{
for (int b = 1; b < nbTasksPerResource[r]; ++b)
model.Constraint(batchInterval[r][b-1] < batchInterval[r][b]);
}
// Interval decisions: time range of each task
taskInterval = new HxExpression[nbTasks];
for (int t = 0; t < nbTasks; ++t) {
// Retrieve the batch index and resource for this task
int r = resource[t];
HxExpression b = model.Find(batchContentArray[r], taskIndexInResource[t]);
// Task interval associated with task t
taskInterval[t] = batchIntervalArray[r][b];
}
// Task durations
for (int t = 0; t < nbTasks; ++t)
model.Constraint(model.Length(taskInterval[t]) == duration[t]);
// Precedence constraints between tasks
for (int t = 0; t < nbTasks; ++t)
{
for (int s = 0; s < nbSuccessors[t]; ++s)
model.Constraint(taskInterval[t] < taskInterval[successors[t][s]]);
}
// Makespan: end of the last task
makespan = model.Max();
for (int t = 0; t < nbTasks; ++t)
makespan.AddOperand(model.End(taskInterval[t]));
model.Minimize(makespan);
model.Close();
// Parameterize the optimizer
optimizer.GetParam().SetTimeLimit(timeLimit);
optimizer.Solve();
}
    /* Write the solution in a file with the following format:
     * - 1st line: makespan
     * - then, for each machine: the machine number, followed by one line per
     *   task on that machine with the task id, start, and end of its interval */
public void WriteSolution(string fileName)
{
using (StreamWriter output = new StreamWriter(fileName))
{
Console.WriteLine("Solution written in file " + fileName);
output.WriteLine(makespan.GetValue());
for (int r = 0; r < nbResources; ++r)
{
output.Write(r);
output.WriteLine();
for (int b = 0; b < nbTasksPerResource[r]; ++b)
{
int t = tasksInResource[r][b];
long start = taskInterval[t].GetIntervalValue().Start();
long end = taskInterval[t].GetIntervalValue().End();
output.Write(t + " " + start + " " + end);
output.WriteLine();
}
}
}
}
public static void Main(string[] args)
{
if (args.Length < 1)
{
Console.WriteLine("Usage: batching_scheduling.cs instanceFile [outputFile] [timeLimit]");
System.Environment.Exit(1);
}
string instanceFile = args[0];
string outputFile = args.Length > 1 ? args[1] : null;
string strTimeLimit = args.Length > 2 ? args[2] : "60";
using (BatchScheduling model = new BatchScheduling(instanceFile))
{
model.ReadInstance(instanceFile);
model.Solve(int.Parse(strTimeLimit));
if (outputFile != null)
model.WriteSolution(outputFile);
}
}
}
- Compilation / Execution (Windows)
javac BatchScheduling.java -cp %HX_HOME%\bin\hexaly.jar
java -cp %HX_HOME%\bin\hexaly.jar;. BatchScheduling instances\5_2-000.nfb
- Compilation / Execution (Linux)
javac BatchScheduling.java -cp /opt/hexaly_13_0/bin/hexaly.jar
java -cp /opt/hexaly_13_0/bin/hexaly.jar:. BatchScheduling instances/5_2-000.nfb
import java.util.*;
import java.io.*;
import com.hexaly.optimizer.*;
public class BatchScheduling {
// Number of tasks
private int nbTasks;
// Number of resources
private int nbResources;
// Capacity of each resource
private int[] capacity;
// Number of tasks assigned to each resource
private int[] nbTasksPerResource;
// Types of tasks assigned to each resource
private int[][] typesInResources;
// Tasks assigned to each resource
private int[][] tasksInResource;
// Index of task i on resource[i]
private int[] taskIndexInResource;
// Type of task i
private int[] type;
// Resource required for task i
private int[] resource;
// Duration of task i
private int[] duration;
// Number of tasks that must succeed task i
private int[] nbSuccessors;
// Task names that must succeed task i
private int[][] successors;
// Longest possible time horizon
private int timeHorizon;
// Hexaly Optimizer
final HexalyOptimizer optimizer;
// Decision variables: collection of sets (representing batches)
private HxExpression[][] batchContent;
// Decision variables: intervals of batches on a resource
private HxExpression[][] batchInterval;
// Decision variables: interval of a task
private HxExpression[] taskInterval;
    // Objective = minimize the makespan: end of the last task
private HxExpression makespan;
public BatchScheduling(HexalyOptimizer optimizer, String fileName) throws IOException
{
        this.optimizer = optimizer;
}
private void readInstance(String fileName) throws IOException {
try (Scanner input = new Scanner(new File(fileName))) {
// first line has number of tasks, number of resources
nbTasks = input.nextInt();
nbResources = input.nextInt();
// second line has the capacity of each resource
capacity = new int[nbResources];
for (int r = 0; r < nbResources; ++r) {
capacity[r] = input.nextInt();
}
// initialize
type = new int[nbTasks];
resource = new int[nbTasks];
duration = new int[nbTasks];
nbSuccessors = new int[nbTasks];
successors = new int[nbTasks][];
taskIndexInResource = new int[nbTasks];
typesInResources = new int[nbResources][];
tasksInResource = new int[nbResources][];
nbTasksPerResource = new int[nbResources];
for (int j = 0; j < nbResources; ++j)
nbTasksPerResource[j] = 0;
// Task information: [type, machine, duration, nbSuccessors, [successors]]
for (int i = 0; i < nbTasks; i++) {
type[i] = input.nextInt();
resource[i] = input.nextInt();
duration[i] = input.nextInt();
nbSuccessors[i] = input.nextInt();
                // Collect the tasks that must succeed task i
successors[i] = new int[nbSuccessors[i]];
for (int j = 0; j < nbSuccessors[i]; j++) {
successors[i][j] = input.nextInt();
}
// Index of task i on resource[i]
taskIndexInResource[i] = nbTasksPerResource[resource[i]];
                // Increment the number of tasks required by this resource
nbTasksPerResource[resource[i]] += 1;
// Add task time to the overall trivial time horizon
timeHorizon += duration[i];
}
// Mapping task list to resource task lists
int[] temp_nbTasksPerResource = new int[nbResources];
for (int r = 0; r < nbResources; ++r)
{
typesInResources[r] = new int[nbTasksPerResource[r]];
tasksInResource[r] = new int[nbTasksPerResource[r]];
temp_nbTasksPerResource[r] = 0;
}
for (int i = 0; i < nbTasks; ++i) {
int r = resource[i];
int index = temp_nbTasksPerResource[r];
                // Map from index of task i on resource[i] to the type of task i
typesInResources[r][index] = type[i];
                // Map from index of task i on resource[i] to task i
tasksInResource[r][index] = i;
// increment
temp_nbTasksPerResource[r] += 1;
}
}
}
public void solve(int timeLimit) {
// Declare the optimization model
HxModel model = optimizer.getModel();
// For each resource, the contents of each batch of tasks performed
batchContent = new HxExpression[nbResources][];
for (int r = 0; r < nbResources; ++r)
{
batchContent[r] = new HxExpression[nbTasksPerResource[r]];
for (int b = 0; b < nbTasksPerResource[r]; ++b)
batchContent[r][b] = model.setVar(nbTasksPerResource[r]);
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] batchContentArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
batchContentArray[r] = model.array(batchContent[r]);
// All tasks are assigned to a batch
for (int r = 0; r < nbResources; ++r)
model.constraint(model.partition(batchContentArray[r]));
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] typesInResourcesArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
typesInResourcesArray[r] = model.array(typesInResources[r]);
// Each batch must consist of tasks with the same type
for (int r = 0; r < nbResources; ++r) {
final int rL = r;
HxExpression resourceTypeLambda = model.lambdaFunction( i -> {
return model.at(typesInResourcesArray[rL], i);
});
for (int b = 0; b < nbTasksPerResource[r]; ++b)
model.constraint(model.leq(model.count(model.distinct(batchContent[r][b], resourceTypeLambda)), 1));
}
// Each batch cannot exceed the maximum capacity of the resource
for (int r = 0; r < nbResources; ++r)
{
for (int b = 0; b < nbTasksPerResource[r]; ++b)
model.constraint(model.leq(model.count(batchContent[r][b]), capacity[r]));
}
// Interval decisions: time range of each batch of tasks
batchInterval = new HxExpression[nbResources][];
for (int r = 0; r < nbResources; ++r)
{
batchInterval[r] = new HxExpression[nbTasksPerResource[r]];
for (int b = 0; b < nbTasksPerResource[r]; ++b)
batchInterval[r][b] = model.intervalVar(0, timeHorizon);
}
// Create HexalyOptimizer arrays in order to be able to access them with "at" operators
HxExpression[] batchIntervalArray = new HxExpression[nbResources];
for (int r = 0; r < nbResources; ++r)
batchIntervalArray[r] = model.array(batchInterval[r]);
// Non-overlap of batch intervals on the same resource
for (int r = 0; r < nbResources; ++r)
{
for (int b = 1; b < nbTasksPerResource[r]; ++b)
model.constraint(model.lt(batchInterval[r][b-1], batchInterval[r][b]));
}
// Interval decisions: time range of each task
taskInterval = new HxExpression[nbTasks];
for (int t = 0; t < nbTasks; ++t) {
// Retrieve the batch index and resource for this task
int r = resource[t];
HxExpression b = model.find(batchContentArray[r], taskIndexInResource[t]);
// Task interval associated with task t
taskInterval[t] = model.at(batchIntervalArray[r], b);
}
// Task durations
for (int t = 0; t < nbTasks; ++t)
model.constraint(model.eq(model.length(taskInterval[t]), duration[t]));
// Precedence constraints between tasks
for (int t = 0; t < nbTasks; ++t)
{
for (int s = 0; s < nbSuccessors[t]; ++s)
model.constraint(model.lt(taskInterval[t], taskInterval[successors[t][s]]));
}
// Makespan: end of the last task
makespan = model.max();
for (int t = 0; t < nbTasks; ++t)
makespan.addOperand(model.end(taskInterval[t]));
model.minimize(makespan);
model.close();
// Parameterize the optimizer
optimizer.getParam().setTimeLimit(timeLimit);
optimizer.solve();
}
    /*
     * Write the solution in a file with the following format:
     * - 1st line: makespan
     * - then, for each machine: the machine number, followed by one line per
     *   task on that machine with the task id, start, and end of its interval
     */
public void writeSolution(String fileName) throws IOException {
try (PrintWriter output = new PrintWriter(fileName)) {
System.out.println("Solution written in file " + fileName);
output.write(makespan.getValue() + "\n");
for (int r = 0; r < nbResources; ++r) {
output.write(r + "\n");
for (int b = 0; b < nbTasksPerResource[r]; ++b) {
int t = tasksInResource[r][b];
long start = taskInterval[t].getIntervalValue().getStart();
long end = taskInterval[t].getIntervalValue().getEnd();
output.write(t + " " + start + " " + end + "\n");
}
}
}
}
public static void main(String[] args) {
if (args.length < 1) {
System.out.println("Usage: java BatchScheduling instanceFile [outputFile] [timeLimit]");
System.exit(1);
}
String instanceFile = args[0];
String outputFile = args.length > 1 ? args[1] : null;
String strTimeLimit = args.length > 2 ? args[2] : "60";
try (HexalyOptimizer optimizer = new HexalyOptimizer()) {
BatchScheduling model = new BatchScheduling(optimizer, instanceFile);
model.readInstance(instanceFile);
model.solve(Integer.parseInt(strTimeLimit));
if (outputFile != null) {
model.writeSolution(outputFile);
}
} catch (Exception ex) {
System.err.println(ex);
ex.printStackTrace();
System.exit(1);
}
}
}