Java - GUI with progressive randomized transformation
I tried a LOT of things, some of them very complicated, then I finally came back to this relatively simple code:
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.Timer;
@SuppressWarnings("serial")
public class CopyColors extends JFrame {
private static final String SOURCE = "spheres";
private static final String PALETTE = "mona";
private static final int COUNT = 10000;
private static final int DELAY = 20;
private static final int LUM_WEIGHT = 10;
private static final double[] F = {0.114, 0.587, 0.299};
private final BufferedImage source;
protected final BufferedImage dest;
private final int sw;
private final int sh;
private final int n;
private final Random r = new Random();
private final JLabel l;
public CopyColors(final String sourceName, final String paletteName) throws IOException {
super("CopyColors by aditsu");
source = ImageIO.read(new File(sourceName + ".png"));
final BufferedImage palette = ImageIO.read(new File(paletteName + ".png"));
sw = source.getWidth();
sh = source.getHeight();
final int pw = palette.getWidth();
final int ph = palette.getHeight();
n = sw * sh;
if (n != pw * ph) {
throw new RuntimeException("source and palette must contain the same number of pixels");
}
dest = new BufferedImage(sw, sh, BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < sh; ++i) {
for (int j = 0; j < sw; ++j) {
final int x = i * sw + j;
dest.setRGB(j, i, palette.getRGB(x % pw, x / pw));
}
}
l = new JLabel(new ImageIcon(dest));
add(l);
final JButton b = new JButton("Save");
add(b, BorderLayout.SOUTH);
b.addActionListener(new ActionListener() {
@Override
public void actionPerformed(final ActionEvent e) {
try {
ImageIO.write(dest, "png", new File(sourceName + "-" + paletteName + ".png"));
} catch (IOException ex) {
ex.printStackTrace();
}
}
});
}
protected double dist(final int x, final int y) {
double t = 0;
double lx = 0;
double ly = 0;
for (int i = 0; i < 3; ++i) {
final double xi = ((x >> (i * 8)) & 255) * F[i];
final double yi = ((y >> (i * 8)) & 255) * F[i];
final double d = xi - yi;
t += d * d;
lx += xi;
ly += yi;
}
double l = lx - ly;
return t + l * l * LUM_WEIGHT;
}
public void improve() {
final int x = r.nextInt(n);
final int y = r.nextInt(n);
final int sx = source.getRGB(x % sw, x / sw);
final int sy = source.getRGB(y % sw, y / sw);
final int dx = dest.getRGB(x % sw, x / sw);
final int dy = dest.getRGB(y % sw, y / sw);
if (dist(sx, dx) + dist(sy, dy) > dist(sx, dy) + dist(sy, dx)) {
dest.setRGB(x % sw, x / sw, dy);
dest.setRGB(y % sw, y / sw, dx);
}
}
public void update() {
l.repaint();
}
public static void main(final String... args) throws IOException {
final CopyColors x = new CopyColors(SOURCE, PALETTE);
x.setSize(800, 600);
x.setLocationRelativeTo(null);
x.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
x.setVisible(true);
new Timer(DELAY, new ActionListener() {
@Override
public void actionPerformed(final ActionEvent e) {
for (int i = 0; i < COUNT; ++i) {
x.improve();
}
x.update();
}
}).start();
}
}
All the relevant parameters are defined as constants at the beginning of the class.
The program first copies the palette image into the source dimensions, then repeatedly chooses 2 random pixels and swaps them if that would get them closer to the source image. "Closer" is defined using a color distance function that calculates the difference between the r, g, b components (luma-weighted) together with the total luma difference, with a greater weight for luma.
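The swap criterion can be restated as a short sketch (Python here for brevity; this is a hedged re-expression of the Java dist() above, not new behavior):

```python
# Sketch of the luma-weighted distance from the Java code above.
# F holds BT.601 coefficients in b, g, r order (byte 0 is blue).
LUM_WEIGHT = 10
F = (0.114, 0.587, 0.299)

def dist(x, y):
    """Squared distance between two packed 0xRRGGBB ints."""
    t = lx = ly = 0.0
    for i in range(3):
        xi = ((x >> (i * 8)) & 255) * F[i]
        yi = ((y >> (i * 8)) & 255) * F[i]
        t += (xi - yi) ** 2
        lx += xi   # the weighted channels sum to the pixel's luma
        ly += yi
    return t + (lx - ly) ** 2 * LUM_WEIGHT
```

Two random pixels are then swapped whenever dist(sx, dx) + dist(sy, dy) > dist(sx, dy) + dist(sy, dx), i.e. whenever the swap strictly reduces the total distance to the source image.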
It takes just a few seconds for the shapes to form, but a while longer for the colors to come together. You can save the current image at any time. I usually waited about 1-3 minutes before saving.
Results:
Unlike some other answers, these images were all generated using the exact same parameters (other than the file names).
American Gothic palette

Mona Lisa palette

Starry Night palette

The Scream palette

Spheres palette
I think this is the toughest test and everybody should post their results with this palette:

Sorry, I didn't find the river image very interesting so I haven't included it.
I also added a video at https://www.youtube.com/watch?v=_-w3cKL5teM , which shows what the program does (not exactly in real time, but similar), then shows the gradual pixel movement using Calvin's Python script. Unfortunately the video quality is significantly degraded by YouTube's encoding/compression.
Java
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import javax.imageio.ImageIO;
/**
*
* @author Quincunx
*/
public class PixelRearranger {
public static void main(String[] args) throws IOException {
BufferedImage source = ImageIO.read(resource("American Gothic.png"));
BufferedImage palette = ImageIO.read(resource("Mona Lisa.png"));
BufferedImage result = rearrange(source, palette);
ImageIO.write(result, "png", resource("result.png"));
validate(palette, result);
}
public static class MInteger {
int val;
public MInteger(int i) {
val = i;
}
}
public static BufferedImage rearrange(BufferedImage source, BufferedImage palette) {
BufferedImage result = new BufferedImage(source.getWidth(),
source.getHeight(), BufferedImage.TYPE_INT_RGB);
//This creates a list of points in the Source image.
//Then, we shuffle it and will draw points in that order.
List<Point> samples = getPoints(source.getWidth(), source.getHeight());
System.out.println("gotPoints");
//Create a list of colors in the palette.
rgbList = getColors(palette);
Collections.sort(rgbList, rgb);
rbgList = new ArrayList<>(rgbList);
Collections.sort(rbgList, rbg);
grbList = new ArrayList<>(rgbList);
Collections.sort(grbList, grb);
gbrList = new ArrayList<>(rgbList);
Collections.sort(gbrList, gbr);
brgList = new ArrayList<>(rgbList);
Collections.sort(brgList, brg);
bgrList = new ArrayList<>(rgbList);
Collections.sort(bgrList, bgr);
while (!samples.isEmpty()) {
Point currentPoint = samples.remove(0);
int sourceAtPoint = source.getRGB(currentPoint.x, currentPoint.y);
int bestColor = search(new MInteger(sourceAtPoint));
result.setRGB(currentPoint.x, currentPoint.y, bestColor);
}
return result;
}
public static List<Point> getPoints(int width, int height) {
HashSet<Point> points = new HashSet<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
points.add(new Point(x, y));
}
}
List<Point> newList = new ArrayList<>();
List<Point> corner1 = new LinkedList<>();
List<Point> corner2 = new LinkedList<>();
List<Point> corner3 = new LinkedList<>();
List<Point> corner4 = new LinkedList<>();
Point p1 = new Point(width / 3, height / 3);
Point p2 = new Point(width * 2 / 3, height / 3);
Point p3 = new Point(width / 3, height * 2 / 3);
Point p4 = new Point(width * 2 / 3, height * 2 / 3);
newList.add(p1);
newList.add(p2);
newList.add(p3);
newList.add(p4);
corner1.add(p1);
corner2.add(p2);
corner3.add(p3);
corner4.add(p4);
points.remove(p1);
points.remove(p2);
points.remove(p3);
points.remove(p4);
long seed = System.currentTimeMillis();
Random c1Random = new Random(seed += 179426549); //The prime number pushes the first numbers apart
Random c2Random = new Random(seed += 179426549); //Or at least I think it does.
Random c3Random = new Random(seed += 179426549);
Random c4Random = new Random(seed += 179426549);
Dir NW = Dir.NW;
Dir N = Dir.N;
Dir NE = Dir.NE;
Dir W = Dir.W;
Dir E = Dir.E;
Dir SW = Dir.SW;
Dir S = Dir.S;
Dir SE = Dir.SE;
while (!points.isEmpty()) {
putPoints(newList, corner1, c1Random, points, NW, N, NE, W, E, SW, S, SE);
putPoints(newList, corner2, c2Random, points, NE, N, NW, E, W, SE, S, SW);
putPoints(newList, corner3, c3Random, points, SW, S, SE, W, E, NW, N, NE);
putPoints(newList, corner4, c4Random, points, SE, S, SW, E, W, NE, N, NW);
}
return newList;
}
public static enum Dir {
NW(-1, -1), N(0, -1), NE(1, -1), W(-1, 0), E(1, 0), SW(-1, 1), S(0, 1), SE(1, 1);
final int dx, dy;
private Dir(int dx, int dy) {
this.dx = dx;
this.dy = dy;
}
public Point add(Point p) {
return new Point(p.x + dx, p.y + dy);
}
}
public static void putPoints(List<Point> newList, List<Point> listToAddTo, Random rand,
HashSet<Point> points, Dir... adj) {
List<Point> newPoints = new LinkedList<>();
for (Iterator<Point> iter = listToAddTo.iterator(); iter.hasNext();) {
Point p = iter.next();
Point pul = adj[0].add(p);
Point pu = adj[1].add(p);
Point pur = adj[2].add(p);
Point pl = adj[3].add(p);
Point pr = adj[4].add(p);
Point pbl = adj[5].add(p);
Point pb = adj[6].add(p);
Point pbr = adj[7].add(p);
int allChosen = 0;
if (points.contains(pul)) {
if (rand.nextInt(5) == 0) {
allChosen++;
newPoints.add(pul);
newList.add(pul);
points.remove(pul);
}
} else {
allChosen++;
}
if (points.contains(pu)) {
if (rand.nextInt(5) == 0) {
allChosen++;
newPoints.add(pu);
newList.add(pu);
points.remove(pu);
}
} else {
allChosen++;
}
if (points.contains(pur)) {
if (rand.nextInt(3) == 0) {
allChosen++;
newPoints.add(pur);
newList.add(pur);
points.remove(pur);
}
} else {
allChosen++;
}
if (points.contains(pl)) {
if (rand.nextInt(5) == 0) {
allChosen++;
newPoints.add(pl);
newList.add(pl);
points.remove(pl);
}
} else {
allChosen++;
}
if (points.contains(pr)) {
if (rand.nextInt(2) == 0) {
allChosen++;
newPoints.add(pr);
newList.add(pr);
points.remove(pr);
}
} else {
allChosen++;
}
if (points.contains(pbl)) {
if (rand.nextInt(5) == 0) {
allChosen++;
newPoints.add(pbl);
newList.add(pbl);
points.remove(pbl);
}
} else {
allChosen++;
}
if (points.contains(pb)) {
if (rand.nextInt(3) == 0) {
allChosen++;
newPoints.add(pb);
newList.add(pb);
points.remove(pb);
}
} else {
allChosen++;
}
if (points.contains(pbr)) {
newPoints.add(pbr);
newList.add(pbr);
points.remove(pbr);
}
if (allChosen == 7) {
iter.remove();
}
}
listToAddTo.addAll(newPoints);
}
public static List<MInteger> getColors(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
List<MInteger> colors = new ArrayList<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
colors.add(new MInteger(img.getRGB(x, y)));
}
}
return colors;
}
public static int search(MInteger color) {
int rgbIndex = binarySearch(rgbList, color, rgb);
int rbgIndex = binarySearch(rbgList, color, rbg);
int grbIndex = binarySearch(grbList, color, grb);
int gbrIndex = binarySearch(gbrList, color, gbr);
int brgIndex = binarySearch(brgList, color, brg);
int bgrIndex = binarySearch(bgrList, color, bgr);
double distRgb = dist(rgbList.get(rgbIndex), color);
double distRbg = dist(rbgList.get(rbgIndex), color);
double distGrb = dist(grbList.get(grbIndex), color);
double distGbr = dist(gbrList.get(gbrIndex), color);
double distBrg = dist(brgList.get(brgIndex), color);
double distBgr = dist(bgrList.get(bgrIndex), color);
double minDist = Math.min(Math.min(Math.min(Math.min(Math.min(
distRgb, distRbg), distGrb), distGbr), distBrg), distBgr);
MInteger ans;
if (minDist == distRgb) {
ans = rgbList.get(rgbIndex);
} else if (minDist == distRbg) {
ans = rbgList.get(rbgIndex);
} else if (minDist == distGrb) {
ans = grbList.get(grbIndex);
} else if (minDist == distGbr) {
ans = gbrList.get(gbrIndex);
} else if (minDist == distBrg) {
ans = brgList.get(brgIndex);
} else {
ans = bgrList.get(bgrIndex);
}
rgbList.remove(ans);
rbgList.remove(ans);
grbList.remove(ans);
gbrList.remove(ans);
brgList.remove(ans);
bgrList.remove(ans);
return ans.val;
}
public static int binarySearch(List<MInteger> list, MInteger val, Comparator<MInteger> cmp){
int index = Collections.binarySearch(list, val, cmp);
if (index < 0) {
index = ~index;
if (index >= list.size()) {
index = list.size() - 1;
}
}
return index;
}
public static double dist(MInteger color1, MInteger color2) {
int c1 = color1.val;
int r1 = (c1 & 0xFF0000) >> 16;
int g1 = (c1 & 0x00FF00) >> 8;
int b1 = (c1 & 0x0000FF);
int c2 = color2.val;
int r2 = (c2 & 0xFF0000) >> 16;
int g2 = (c2 & 0x00FF00) >> 8;
int b2 = (c2 & 0x0000FF);
int dr = r1 - r2;
int dg = g1 - g2;
int db = b1 - b2;
return Math.sqrt(dr * dr + dg * dg + db * db);
}
//This method is here solely for my ease of use (I put the files under <Project Name>/Resources/ )
public static File resource(String fileName) {
return new File(System.getProperty("user.dir") + "/Resources/" + fileName);
}
static List<MInteger> rgbList;
static List<MInteger> rbgList;
static List<MInteger> grbList;
static List<MInteger> gbrList;
static List<MInteger> brgList;
static List<MInteger> bgrList;
static Comparator<MInteger> rgb = (color1, color2) -> color1.val - color2.val;
static Comparator<MInteger> rbg = (color1, color2) -> {
int c1 = color1.val;
int c2 = color2.val;
c1 = ((c1 & 0xFF0000)) | ((c1 & 0x00FF00) >> 8) | ((c1 & 0x0000FF) << 8);
c2 = ((c2 & 0xFF0000)) | ((c2 & 0x00FF00) >> 8) | ((c2 & 0x0000FF) << 8);
return c1 - c2;
};
static Comparator<MInteger> grb = (color1, color2) -> {
int c1 = color1.val;
int c2 = color2.val;
c1 = ((c1 & 0xFF0000) >> 8) | ((c1 & 0x00FF00) << 8) | ((c1 & 0x0000FF));
c2 = ((c2 & 0xFF0000) >> 8) | ((c2 & 0x00FF00) << 8) | ((c2 & 0x0000FF));
return c1 - c2;
};
static Comparator<MInteger> gbr = (color1, color2) -> {
int c1 = color1.val;
int c2 = color2.val;
c1 = ((c1 & 0xFF0000) >> 16) | ((c1 & 0x00FF00) << 8) | ((c1 & 0x0000FF) << 8);
c2 = ((c2 & 0xFF0000) >> 16) | ((c2 & 0x00FF00) << 8) | ((c2 & 0x0000FF) << 8);
return c1 - c2;
};
static Comparator<MInteger> brg = (color1, color2) -> {
int c1 = color1.val;
int c2 = color2.val;
c1 = ((c1 & 0xFF0000) >> 8) | ((c1 & 0x00FF00) >> 8) | ((c1 & 0x0000FF) << 16);
c2 = ((c2 & 0xFF0000) >> 8) | ((c2 & 0x00FF00) >> 8) | ((c2 & 0x0000FF) << 16);
return c1 - c2;
};
static Comparator<MInteger> bgr = (color1, color2) -> {
int c1 = color1.val;
int c2 = color2.val;
c1 = ((c1 & 0xFF0000) >> 16) | ((c1 & 0x00FF00)) | ((c1 & 0x0000FF) << 16);
c2 = ((c2 & 0xFF0000) >> 16) | ((c2 & 0x00FF00)) | ((c2 & 0x0000FF) << 16);
return c1 - c2;
};
public static void validate(BufferedImage palette, BufferedImage result) {
List<Integer> paletteColors = getTrueColors(palette);
List<Integer> resultColors = getTrueColors(result);
Collections.sort(paletteColors);
Collections.sort(resultColors);
System.out.println(paletteColors.equals(resultColors));
}
public static List<Integer> getTrueColors(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
List<Integer> colors = new ArrayList<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
colors.add(img.getRGB(x, y));
}
}
Collections.sort(colors);
return colors;
}
}
My approach works by finding the closest color to each pixel (well, likely the closest), in 3-space, since colors are 3D.
This works by creating a list of all the points we need to fill and a list of all the possible colors we can use. We randomize the list of points (so the image will turn out better), then we go through each point and get the color of the source image.
Update: I used to do a single binary search, so red matched better than green, which matched better than blue. I have now changed it to do six binary searches (one per channel-order permutation), then choose the closest of the six candidates. It only takes ~6 times as long (i.e. 1 minute). While the pictures are still grainy, the colors match better.
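The idea behind searching several channel orderings can be sketched independently (Python, with a hypothetical toy palette; bisect stands in for Collections.binarySearch). Sorting by one channel order makes the most significant channel dominate the match, so searching multiple orderings and keeping the closest candidate reduces that bias:

```python
import bisect

def euclid2(c1, c2):
    # Squared Euclidean distance between two (r, g, b) tuples
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def nearest(orders, query):
    """orders: list of (key_fn, colors_sorted_by_that_key) pairs.
    Returns the candidate closest to query among all bisect hits."""
    best = None
    for key, colors in orders:
        keys = [key(c) for c in colors]  # would be precomputed in practice
        i = bisect.bisect_left(keys, key(query))
        for j in (i - 1, i):             # check both neighbours of the hit
            if 0 <= j < len(colors):
                c = colors[j]
                if best is None or euclid2(c, query) < euclid2(best, query):
                    best = c
    return best

palette = [(0, 0, 0), (10, 200, 10), (250, 0, 0), (255, 255, 255)]
rgb = lambda c: (c[0], c[1], c[2])   # most significant channel: red
bgr = lambda c: (c[2], c[1], c[0])   # most significant channel: blue
orders = [(rgb, sorted(palette, key=rgb)), (bgr, sorted(palette, key=bgr))]
best = nearest(orders, (240, 5, 5))  # query close to pure red
```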
Update 2: I no longer randomize the list. Instead, I choose 4 points following the rule of thirds, then randomly arrange the points, with preference to filling out the center.
Note: See the revision history for the old pictures.
Mona Lisa -> River:

Mona Lisa -> American Gothic:

Mona Lisa -> Raytraced Spheres:

Starry Night -> Mona Lisa:

Here's an animated Gif showing how the image was constructed:

And showing the pixels being taken from the Mona Lisa:

Perl, with Lab color space and dithering
Note: Now I have a C solution too.
Uses a similar approach to aditsu's (choose two random positions, and swap the pixels at those positions if it would make the image more like the target image), with two major improvements:
- Uses the CIE L*a*b* color space to compare colors: the Euclidean metric on this space is a very good approximation to the perceptual difference between two colors, so the color mappings should be more accurate than in RGB or even HSV/HSL.
- After an initial pass putting pixels in the best possible single position, it does an additional pass with a random dither. Instead of comparing the pixel values at the two swap positions, it computes the average pixel value of a 3x3 neighborhood centered at the swap positions. If a swap improves the average colors of the neighborhoods it's allowed, even if it makes individual pixels less accurate. For some image pairs this has a dubious effect on the quality (and makes the palette effect less striking), but for some (like spheres -> anything) it helps quite a bit. The "detail" factor emphasizes the central pixel to a variable degree. Increasing it decreases the overall amount of dither, but retains more fine detail from the target image. The dithered optimization is slower, which is why we run it on the output of the non-dithered optimization as a starting point.
Averaging Lab values, as the dither does, isn't really justified (they should be converted to XYZ, averaged, and converted back), but it works just fine for these purposes.
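For reference, the sRGB → CIELAB conversion the comparison relies on can be sketched as follows (Python, standard CIE formulas with the D65 white point; the answer itself uses Graphics::ColorObject's RGB_to_Lab, whose internals may differ slightly):

```python
# Standard sRGB (0..255) -> CIELAB conversion, D65 white point.
def srgb_to_lab(r, g, b):
    def linearize(c):
        # Undo the sRGB gamma to get linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> XYZ (sRGB matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        # CIE cube-root function with its linear toe
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_dist2(p, q):
    """Squared Euclidean distance in Lab, as used for the swap test."""
    return sum((a - b) ** 2 for a, b in zip(p, q))
```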
These images have termination limits of 100 and 200 (end the first phase when less than 1 in 5000 swaps is accepted, and the second phase at 1 in 2500), and a dithering detail factor of 12 (a little tighter dither than the previous set). At this super high quality setting, the images take a long time to generate, but with parallelization the whole job still finishes within an hour on my 6-core box. Bumping the values up to 500 or so finishes images within a few minutes, they just look a little less polished. I wanted to show off the algorithm to the best here.
Code is not by any means pretty:
#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;
use Graphics::ColorObject 'RGB_to_Lab';
use List::Util qw(sum max);
my $source = Image::Magick->new;
$source->Read($ARGV[0]);
my $target = Image::Magick->new;
$target->Read($ARGV[1]);
my ($limit1, $limit2, $detail) = @ARGV[2,3,4];
my ($width, $height) = ($target->Get('width'), $target->Get('height'));
# Transfer the pixels of $source onto a new canvas with the dimensions of $target
$source->Set(magick => 'RGB');
my $img = Image::Magick->new(size => "${width}x${height}", magick => 'RGB', depth => 8);
$img->BlobToImage($source->ImageToBlob);
my ($made, $rejected) = (0,0);
system("rm anim/*.png");
my (@img_lab, @target_lab);
for my $x (0 .. $width - 1) {
for my $y (0 .. $height - 1) {
$img_lab[$x][$y] = RGB_to_Lab([$img->getPixel(x => $x, y => $y)], 'sRGB');
$target_lab[$x][$y] = RGB_to_Lab([$target->getPixel(x => $x, y => $y)], 'sRGB');
}
}
my $n = 0;
my $frame = 0;
my $mode = 1;
while (1) {
$n++;
my $swap = 0;
my ($x1, $x2, $y1, $y2) = (int rand $width, int rand $width, int rand $height, int rand $height);
my ($dist, $dist_swapped);
if ($mode == 1) {
$dist = (sum map { ($img_lab[$x1][$y1][$_] - $target_lab[$x1][$y1][$_])**2 } 0..2)
+ (sum map { ($img_lab[$x2][$y2][$_] - $target_lab[$x2][$y2][$_])**2 } 0..2);
$dist_swapped = (sum map { ($img_lab[$x2][$y2][$_] - $target_lab[$x1][$y1][$_])**2 } 0..2)
+ (sum map { ($img_lab[$x1][$y1][$_] - $target_lab[$x2][$y2][$_])**2 } 0..2);
} else { # dither mode
my $xoffmin = ($x1 == 0 || $x2 == 0 ? 0 : -1);
my $xoffmax = ($x1 == $width - 1 || $x2 == $width - 1 ? 0 : 1);
my $yoffmin = ($y1 == 0 || $y2 == 0 ? 0 : -1);
my $yoffmax = ($y1 == $height - 1 || $y2 == $height - 1 ? 0 : 1);
my (@img1, @img2, @target1, @target2, $points);
for my $xoff ($xoffmin .. $xoffmax) {
for my $yoff ($yoffmin .. $yoffmax) {
$points++;
for my $chan (0 .. 2) {
$img1[$chan] += $img_lab[$x1+$xoff][$y1+$yoff][$chan];
$img2[$chan] += $img_lab[$x2+$xoff][$y2+$yoff][$chan];
$target1[$chan] += $target_lab[$x1+$xoff][$y1+$yoff][$chan];
$target2[$chan] += $target_lab[$x2+$xoff][$y2+$yoff][$chan];
}
}
}
my @img1s = @img1;
my @img2s = @img2;
for my $chan (0 .. 2) {
$img1[$chan] += $img_lab[$x1][$y1][$chan] * ($detail - 1);
$img2[$chan] += $img_lab[$x2][$y2][$chan] * ($detail - 1);
$target1[$chan] += $target_lab[$x1][$y1][$chan] * ($detail - 1);
$target2[$chan] += $target_lab[$x2][$y2][$chan] * ($detail - 1);
$img1s[$chan] += $img_lab[$x2][$y2][$chan] * $detail - $img_lab[$x1][$y1][$chan];
$img2s[$chan] += $img_lab[$x1][$y1][$chan] * $detail - $img_lab[$x2][$y2][$chan];
}
$dist = (sum map { ($img1[$_] - $target1[$_])**2 } 0..2)
+ (sum map { ($img2[$_] - $target2[$_])**2 } 0..2);
$dist_swapped = (sum map { ($img1s[$_] - $target1[$_])**2 } 0..2)
+ (sum map { ($img2s[$_] - $target2[$_])**2 } 0..2);
}
if ($dist_swapped < $dist) {
my @pix1 = $img->GetPixel(x => $x1, y => $y1);
my @pix2 = $img->GetPixel(x => $x2, y => $y2);
$img->SetPixel(x => $x1, y => $y1, color => \@pix2);
$img->SetPixel(x => $x2, y => $y2, color => \@pix1);
($img_lab[$x1][$y1], $img_lab[$x2][$y2]) = ($img_lab[$x2][$y2], $img_lab[$x1][$y1]);
$made ++;
} else {
$rejected ++;
}
if ($n % 50000 == 0) {
# print "Made: $made Rejected: $rejected\n";
$img->Write('png:out.png');
system("cp", "out.png", sprintf("anim/frame%05d.png", $frame++));
if ($mode == 1 and $made < $limit1) {
$mode = 2;
system("cp", "out.png", "nodither.png");
} elsif ($mode == 2 and $made < $limit2) {
last;
}
($made, $rejected) = (0, 0);
}
}
Results
American Gothic palette


Little difference here with dithering or not.
Mona Lisa palette



Dithering reduces the banding on the spheres, but isn't especially pretty.
Starry Night palette



Mona Lisa retains a bit more detail with dithering. Spheres is about the same situation as last time.
Scream palette




Starry Night without dithering is the most awesome thing ever. Dithering makes it more photo-accurate, but far less interesting.
Spheres palette
As aditsu says, the true test. I think I pass.



Dithering helps immensely with American Gothic and Mona Lisa, mixing some grays and other colors in with the more intense pixels to produce semi-accurate skin tones instead of horrible blotches. The Scream is affected far less.
Camaro - Mustang
Source images from flawr's post.
Camaro: 
Mustang: 
Camaro palette

Looks pretty good without dither.

A "tight" dither (same detail factor as above) doesn't change much, just adds a little detail in the highlights on the hood and roof.

A "loose" dither (detail factor dropped to 6) really smooths out the tonality, and a lot more detail is visible through the windshield, but dithering patterns are more obvious everywhere.
Mustang palette

Parts of the car look great, but the gray pixels look glitchy. What's worse, all the darker yellow pixels got distributed over the red Camaro body, and the non-dithering algorithm can't find anything to do with the lighter ones (moving them into the car would make the match worse, and moving them to another spot on the background makes no net difference), so there's a ghost-Mustang in the background.

Dithering is able to spread those extra yellow pixels around so they don't touch, scattering them more-or-less evenly over the background in the process. The highlights and shadows on the car look a little better.

Again, the loose dither has the most even tonality and reveals more detail on the headlights and windshield. The car almost looks red again. The background is clumpier for some reason. Not sure if I like it.
Video
(HQ Link)
Python
The idea is simple: every pixel is a point in 3D RGB space. The goal is to match each pixel of the source image with one of the destination image; preferably they should be 'close' (represent the 'same' colour). Since the two clouds can be distributed in very different ways, we cannot just match nearest neighbours.
Strategy
Let n be a small integer (3-255 or so). The pixel cloud in RGB space is sorted by the first axis (R) and partitioned into n partitions. Each partition is then sorted along the second axis (G), and again partitioned the same way, and so on for the third axis. We do this for both pictures, giving an array of points for each. Now we can simply match pixels by their position in the arrays, since a pixel at the same position in each array has a similar position relative to its pixel cloud in RGB space.
If the pixel distributions in RGB space of the two images are similar (i.e. only shifted and/or stretched along the three axes), the result is pretty predictable. If the distributions look completely different, this algorithm will not produce results as good (as seen in the last example), but that is also one of the harder cases, I think. What it does not do is exploit perceptual interactions between neighbouring pixels.
Code
Disclaimer: I am an absolute newbie to python.
from PIL import Image
n = 5 #number of partitions per channel.
src_index = 3 #index of source image
dst_index = 2 #index of destination image
images = ["img0.bmp","img1.bmp","img2.bmp","img3.bmp"];
src_handle = Image.open(images[src_index])
dst_handle = Image.open(images[dst_index])
src = src_handle.load()
dst = dst_handle.load()
assert src_handle.size[0]*src_handle.size[1] == dst_handle.size[0]*dst_handle.size[1],"images must be same size"
def makePixelList(img):
l = []
for x in range(img.size[0]):
for y in range(img.size[1]):
l.append((x,y))
return l
lsrc = makePixelList(src_handle)
ldst = makePixelList(dst_handle)
def sortAndDivide(coordlist,pixelimage,channel): #core
global src,dst,n
retlist = []
#sort
coordlist.sort(key=lambda t: pixelimage[t][channel])
#divide
partitionLength = int(len(coordlist)/n)
if partitionLength <= 0:
partitionLength = 1
if channel < 2:
for i in range(0,len(coordlist),partitionLength):
retlist += sortAndDivide(coordlist[i:i+partitionLength],pixelimage,channel+1)
else:
retlist += coordlist
return retlist
print(src[lsrc[0]])
lsrc = sortAndDivide(lsrc,src,0)
ldst = sortAndDivide(ldst,dst,0)
for i in range(len(ldst)):
dst[ldst[i]] = src[lsrc[i]]
dst_handle.save("exchange"+str(src_index)+str(dst_index)+".png")
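The same sort-and-partition idea can be demonstrated standalone on hypothetical toy pixel tuples (no PIL needed):

```python
# Toy demo of the sort-and-partition matching (hypothetical pixel tuples).
def sort_and_divide(pixels, channel=0, n=2):
    """Sort by one channel, partition into n chunks, recurse on the next."""
    pixels = sorted(pixels, key=lambda p: p[channel])
    if channel == 2:          # last channel: the order is final
        return pixels
    size = max(1, len(pixels) // n)
    out = []
    for i in range(0, len(pixels), size):
        out += sort_and_divide(pixels[i:i + size], channel + 1, n)
    return out

src = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (200, 200, 200)]
dst = [(250, 10, 10), (10, 250, 10), (10, 10, 250), (180, 180, 180)]
# Pixels at the same index after ordering sit in similar relative positions
# of their RGB clouds, so zipping the two orderings yields the matching.
pairs = dict(zip(sort_and_divide(src), sort_and_divide(dst)))
```

With these toy clouds (which are just shifted versions of each other), every pixel ends up matched to its obvious counterpart.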
Result
I think it turned out not bad, considering how simple the solution is. You can of course get better results by fiddling with the parameter, first transforming the colors into another color space, or optimizing the partitioning.
Full Gallery here: https://i.sstatic.net/sYVFZ.jpg
Detailed for River
monalisa > river

people > river

balls > river

starry night > river

the cry > river

balls > MonaLisa, varying n = 2,4,6,...,20
This was the most challenging task, I think, and far from producing nice pictures. Here is a GIF (it had to be reduced to 256 colours) of the different parameter values n = 2, 4, 6, ..., 20. To me it was surprising that very low values produced better images (when looking at the face of Mme. Lisa):

Sorry I cannot stop
Which one do you like better: Chevy Camaro or Ford Mustang? Perhaps this technique could be improved and used for colouring b/w pictures. Now here: first I cut the cars roughly out from the background by painting it white (in Paint, not very professional...), and then used the Python program in each direction.
Originals

Recoloured
There are some artifacts, I think because the area of one car was slightly bigger than the other and because my artistic skills are pretty bad=)

Python - A theoretically optimal solution
I say theoretically optimal because the truly optimal solution is not quite feasible to compute. I start off by describing the theoretical solution, and then explain how I tweaked it to make it computationally feasible in both space and time.
I consider the optimal solution to be the one yielding the lowest total error across all pixels between the old and new images. The error between two pixels is defined as the Euclidean distance between their points in 3D space, where each color value (R, G, B) is a coordinate. In practice, because of how humans see things, the optimal solution may well not be the best-looking one. However, it appears to do fairly well in all cases.
To compute the mapping, I treated this as a minimum-weight bipartite matching problem. In other words, there are two sets of nodes: the original pixels and the palette pixels. An edge is created between each pixel across the two sets (but no edges within a set). The cost, or weight, of an edge is the Euclidean distance between the two pixels, as described above. The closer two colors are visually, the lower the cost of the edge.

This creates a cost matrix of size N². For these images, where N = 123520, approximately 40 GB of memory is required to represent the costs as integers, and half that as short integers. Either way, I did not have enough memory on my machine to make an attempt. Another issue is that the Hungarian algorithm, and the Jonker-Volgenant algorithm which can also solve this problem, run in O(N³) time. While definitely computable, generating a solution per image would likely have taken hours or days.
To get around this, I randomly shuffle both lists of pixels, split them into C chunks, run a C++ implementation of the Jonker-Volgenant algorithm on each sublist pair, and then join the results to create the final mapping. Therefore, the code below would let one find the truly optimal solution, provided they set the chunk size C to 1 (no chunking) and have enough memory. For these images, I set C to 16, so that N becomes 7720, taking just a few minutes per image.
A simple way to think of why this works is that randomly sorting the list of pixels and then taking a subset is like sampling the image. So by setting C = 16, it's like taking 16 distinct random samples of size N/C from both the original and palette. Granted, there are probably better ways of splitting the lists, but a random approach provides decent results.
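The chunking idea can be sketched standalone; in this illustrative version a brute-force exact assignment stands in for the C++ Jonker-Volgenant implementation (fine for the toy sizes shown here, hopeless for real N):

```python
from itertools import permutations
from random import Random

def lap_bruteforce(cost):
    """Exact min-cost assignment by brute force; stands in for the
    Jonker-Volgenant solver and is only viable for tiny chunks."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))

def chunked_matching(palette, source, C=1, seed=0):
    """Shuffle both pixel lists, split into C chunks, solve each chunk
    exactly, and stitch the sub-assignments into one mapping.
    mapping[i] = index of the palette pixel assigned to source pixel i."""
    rng = Random(seed)
    n = len(palette)
    p_idx = list(range(n)); rng.shuffle(p_idx)
    s_idx = list(range(n)); rng.shuffle(s_idx)
    size = n // C
    mapping = [None] * n
    for start in range(0, n, size):
        p = p_idx[start:start + size]
        s = s_idx[start:start + size]
        # Squared-distance cost matrix between this source and palette chunk
        cost = [[sum((a - b) ** 2 for a, b in zip(source[si], palette[pj]))
                 for pj in p] for si in s]
        for i, j in enumerate(lap_bruteforce(cost)):
            mapping[s[i]] = p[j]
    return mapping

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]
source = [(250, 5, 5), (10, 10, 10), (240, 250, 245), (5, 250, 5)]
m = chunked_matching(palette, source, C=1)  # C=1: exact optimum
```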
import multiprocessing as mp
import sys
from random import shuffle
from PIL import Image
import numpy as np
import LAPJV
def getError(p1, p2):
return (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2 + (p1[2]-p2[2])**2
def getCostMatrix(pallete_list, source_list):
num_pixels = len(pallete_list)
matrix = np.zeros((num_pixels, num_pixels))
for i in range(num_pixels):
for j in range(num_pixels):
matrix[i][j] = getError(pallete_list[i], source_list[j])
return matrix
def chunks(l, n):
if n < 1:
n = 1
return [l[i:i + n] for i in range(0, len(l), n)]
def imageToColorList(img_file):
i = Image.open(img_file)
pixels = i.load()
width, height = i.size
all_pixels = []
for x in range(width):
for y in range(height):
pixel = pixels[x, y]
all_pixels.append(pixel)
return all_pixels
def colorListToImage(color_list, old_img_file, new_img_file, mapping):
i = Image.open(old_img_file)
pixels = i.load()
width, height = i.size
idx = 0
for x in range(width):
for y in range(height):
pixels[x, y] = color_list[mapping[idx]]
idx += 1
i.save(new_img_file)
def getMapping(pallete_list, source_list):
matrix = getCostMatrix(source_list, pallete_list)
result = LAPJV.lap(matrix)[1]
ret = []
for i in range(len(pallete_list)):
ret.append(result[i])
return ret
def randomizeList(l):
rdm_l = list(l)
shuffle(rdm_l)
return rdm_l
def getPartialMapping(zipped_chunk):
pallete_chunk = zipped_chunk[0]
source_chunk = zipped_chunk[1]
subl_pallete = map(lambda v: v[1], pallete_chunk)
subl_source = map(lambda v: v[1], source_chunk)
mapping = getMapping(subl_pallete, subl_source)
return mapping
def getMappingWithPartitions(pallete_list, source_list, C = 1):
rdm_pallete = randomizeList(enumerate(pallete_list))
rdm_source = randomizeList(enumerate(source_list))
num_pixels = len(rdm_pallete)
real_mapping = [0] * num_pixels
chunk_size = int(num_pixels / C)
chunked_rdm_pallete = chunks(rdm_pallete, chunk_size)
chunked_rdm_source = chunks(rdm_source, chunk_size)
zipped_chunks = zip(chunked_rdm_pallete, chunked_rdm_source)
pool = mp.Pool(2)
mappings = pool.map(getPartialMapping, zipped_chunks)
for mapping, zipped_chunk in zip(mappings, zipped_chunks):
pallete_chunk = zipped_chunk[0]
source_chunk = zipped_chunk[1]
for idx1,idx2 in enumerate(mapping):
src_px = source_chunk[idx1]
pal_px = pallete_chunk[idx2]
real_mapping[src_px[0]] = pal_px[0]
return real_mapping
def run(pallete_name, source_name, output_name):
print("Getting Colors...")
pallete_list = imageToColorList(pallete_name)
source_list = imageToColorList(source_name)
print("Creating Mapping...")
mapping = getMappingWithPartitions(pallete_list, source_list, C = 16)
print("Generating Image...");
colorListToImage(pallete_list, source_name, output_name, mapping)
if __name__ == '__main__':
pallete_name = sys.argv[1]
source_name = sys.argv[2]
output_name = sys.argv[3]
run(pallete_name, source_name, output_name)
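The partition trick can be shown in miniature without the LAPJV module: shuffle both pixel lists, split them into C chunks, and solve each small assignment problem exactly. Here a brute-force solver stands in for LAPJV (a sketch only, feasible at toy sizes):

```python
# Toy sketch of the partitioned-assignment idea above. Brute force over
# permutations replaces the LAPJV solver; everything else is the same shape.
from itertools import permutations
from random import Random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def best_assignment(sources, palette):
    # For each source pixel, the index of its optimally assigned palette pixel.
    best = min(permutations(range(len(palette))),
               key=lambda perm: sum(dist2(sources[i], palette[j])
                                    for i, j in enumerate(perm)))
    return list(best)

def partitioned_mapping(source, palette, C, seed=0):
    rnd = Random(seed)
    n = len(source)
    src_idx = list(range(n)); rnd.shuffle(src_idx)   # shuffle = random sampling
    pal_idx = list(range(n)); rnd.shuffle(pal_idx)
    size = n // C
    mapping = [0] * n                                # source index -> palette index
    for c in range(C):
        s = src_idx[c * size:(c + 1) * size]
        p = pal_idx[c * size:(c + 1) * size]
        perm = best_assignment([source[i] for i in s],
                               [palette[j] for j in p])
        for i, j in zip(s, perm):
            mapping[i] = p[j]
    return mapping

rnd = Random(1)
pixels = lambda n: [(rnd.random(), rnd.random(), rnd.random()) for _ in range(n)]
m = partitioned_mapping(pixels(16), pixels(16), C=4)
# The chunks partition both lists, so every palette pixel is used exactly once.
assert sorted(m) == list(range(16))
```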
Results:
As with aditsu's solution, these images were all generated using the exact same parameters. The only parameter here is C, which should be set as low as possible: lower C means larger per-chunk assignment problems and higher quality, at the cost of speed. For me, C = 16 was a good balance between speed and quality.
All images: https://i.sstatic.net/AqjYY.jpg
American Gothic palette

Mona Lisa palette

Starry Night palette

Scream palette

River palette

Spheres palette

\$\endgroup\$
\$\begingroup\$
C# Winform - Visual Studio 2010
Edit: dithering added.
This is my version of the random-swap algorithm, @hobbs flavour. I still feel that some sort of non-random dithering could do better...
Colors are processed in Y-Cb-Cr space (as in JPEG compression).
Two-phase processing:
- Copy palette pixels onto the source positions in luminance order. This already gives a good image, but desaturated - almost greyscale - in near-zero time.
- Repeated random swapping of pixel pairs. A swap is kept if it gives a smaller delta (with respect to the source) over the 3x3 cell containing each pixel, so it acts as a dithering effect. The delta is calculated in Y-Cb-Cr space with no weighting of the different components.
This is essentially the same method used by @hobbs, minus the initial random-swap phase without dithering. My times are just shorter (the language counts?) and I think my images are better (probably because the color space used is more accurate).
Program usage: put the .png images in your c:\temp folder, check an element in the list to choose the palette image, select an element in the list to choose the source image (not very user friendly). Click the Start button to begin; saving is automatic, even when you'd prefer it weren't - beware.
Processing time is under 90 seconds.
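The swap criterion in phase two can be sketched outside the GUI. This is a simplified Python sketch, not the program itself: a single luminance channel stands in for Y-Cb-Cr, and the neighbourhood sums are recomputed naively instead of incrementally as in CheckSwap below.

```python
# Simplified sketch of the 3x3 weighted swap test: a candidate swap of two
# pixels is kept only if it reduces the squared difference between the
# weighted neighbourhood sums of the output and the source.
from random import Random

def cell_sum(img, x, y):
    # Weight 3 - |dx| - |dy|: centre counts 3, edges 2, corners 1,
    # mirroring the fmax accumulation in CheckSwap.
    return sum((3 - abs(dx) - abs(dy)) * img[y + dy][x + dx]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1))

def try_swap(img, src, x1, y1, x2, y2):
    def delta():
        return ((cell_sum(img, x1, y1) - cell_sum(src, x1, y1)) ** 2 +
                (cell_sum(img, x2, y2) - cell_sum(src, x2, y2)) ** 2)
    cur = delta()
    img[y1][x1], img[y2][x2] = img[y2][x2], img[y1][x1]
    if delta() < cur:
        return True                                       # keep the swap
    img[y1][x1], img[y2][x2] = img[y2][x2], img[y1][x1]   # undo
    return False

rnd = Random(0)
w = h = 8
src = [[rnd.randrange(256) for _ in range(w)] for _ in range(h)]
img = [[rnd.randrange(256) for _ in range(w)] for _ in range(h)]
for _ in range(2000):
    try_swap(img, src, rnd.randrange(1, w - 1), rnd.randrange(1, h - 1),
             rnd.randrange(1, w - 1), rnd.randrange(1, h - 1))
```

Because the weighted sums compare local averages rather than exact pixels, accepted swaps push dither patterns toward the source's local tones.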
Updated Results
Palette: American Gothic

Palette: Mona Lisa

Palette: Rainbow

Palette: River

Palette: Scream

Palette: Starry Night

Form1.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Imaging;
using System.IO;
namespace Palette
{
public struct YRB
{
public int y, cb, cr;
public YRB(int r, int g, int b)
{
y = (int)(0.299 * r + 0.587 * g + 0.114 * b);
cb = (int)(128 - 0.168736 * r - 0.331264 * g + 0.5 * b);
cr = (int)(128 + 0.5 * r - 0.418688 * g - 0.081312 * b);
}
}
public struct Pixel
{
private const int ARGBAlphaShift = 24;
private const int ARGBRedShift = 16;
private const int ARGBGreenShift = 8;
private const int ARGBBlueShift = 0;
public int px, py;
private uint _color;
public YRB yrb;
public Pixel(uint col, int px = 0, int py = 0)
{
this.px = px;
this.py = py;
this._color = col;
yrb = new YRB((int)(col >> ARGBRedShift) & 255, (int)(col >> ARGBGreenShift) & 255, (int)(col >> ARGBBlueShift) & 255);
}
public uint color
{
get {
return _color;
}
set {
_color = value;
yrb = new YRB((int)(value >> ARGBRedShift) & 255, (int)(value >> ARGBGreenShift) & 255, (int)(value >> ARGBBlueShift) & 255);
}
}
public int y
{
get { return yrb.y; }
}
public int cr
{
get { return yrb.cr; }
}
public int cb
{
get { return yrb.cb; }
}
}
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
DirectoryInfo di = new System.IO.DirectoryInfo(@"c:\temp\");
foreach (FileInfo file in di.GetFiles("*.png"))
{
ListViewItem item = new ListViewItem(file.Name);
item.SubItems.Add(file.FullName);
lvFiles.Items.Add(item);
}
}
private void lvFiles_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
{
if (e.IsSelected)
{
string file = e.Item.SubItems[1].Text;
GetImagePB(pbSource, file);
pbSource.Tag = file;
DupImage(pbSource, pbOutput);
this.Width = pbOutput.Width + pbOutput.Left + 20;
this.Height = Math.Max(pbOutput.Height, pbPalette.Height)+lvFiles.Height*2;
}
}
private void lvFiles_ItemCheck(object sender, ItemCheckEventArgs e)
{
foreach (ListViewItem item in lvFiles.CheckedItems)
{
if (item.Index != e.Index) item.Checked = false;
}
string file = lvFiles.Items[e.Index].SubItems[1].Text;
GetImagePB(pbPalette, file);
pbPalette.Tag = lvFiles.Items[e.Index].SubItems[0].Text;
this.Width = pbOutput.Width + pbOutput.Left + 20;
this.Height = Math.Max(pbOutput.Height, pbPalette.Height) + lvFiles.Height * 2;
}
Pixel[] Palette;
Pixel[] Source;
private void BtnStart_Click(object sender, EventArgs e)
{
lvFiles.Enabled = false;
btnStart.Visible = false;
progressBar.Visible = true;
DupImage(pbSource, pbOutput);
Work(pbSource.Image as Bitmap, pbPalette.Image as Bitmap, pbOutput.Image as Bitmap);
string newfile = (string)pbSource.Tag +"-"+ (string)pbPalette.Tag;
pbOutput.Image.Save(newfile, ImageFormat.Png);
lvFiles.Enabled = true;
btnStart.Visible = true;
progressBar.Visible = false;
}
private void Work(Bitmap srcb, Bitmap palb, Bitmap outb)
{
GetData(srcb, out Source);
GetData(palb, out Palette);
FastBitmap fout = new FastBitmap(outb);
FastBitmap fsrc = new FastBitmap(srcb);
int pm = Source.Length;
int w = outb.Width;
int h = outb.Height;
progressBar.Maximum = pm;
fout.LockImage();
for (int p = 0; p < pm; p++)
{
fout.SetPixel(Source[p].px, Source[p].py, Palette[p].color);
}
fout.UnlockImage();
pbOutput.Refresh();
var rnd = new Random();
int totsw = 0;
progressBar.Maximum = 200;
for (int i = 0; i < 200; i++)
{
int nsw = 0;
progressBar.Value = i;
fout.LockImage();
fsrc.LockImage();
for (int j = 0; j < 200000; j++)
{
nsw += CheckSwap(fsrc, fout, 1 + rnd.Next(w - 2), 1 + rnd.Next(h - 2), 1 + rnd.Next(w - 2), 1 + rnd.Next(h - 2));
}
totsw += nsw;
lnCurSwap.Text = nsw.ToString();
lnTotSwap.Text = totsw.ToString();
fout.UnlockImage();
fsrc.UnlockImage();
pbOutput.Refresh();
Application.DoEvents();
if (nsw == 0)
{
break;
}
}
}
int CheckSwap(FastBitmap fsrc, FastBitmap fout, int x1, int y1, int x2, int y2)
{
const int fmax = 3;
YRB ov1 = new YRB();
YRB sv1 = new YRB();
YRB ov2 = new YRB();
YRB sv2 = new YRB();
int f;
for (int dx = -1; dx <= 1; dx++)
{
for (int dy = -1; dy <= 1; dy++)
{
f = (fmax - Math.Abs(dx) - Math.Abs(dy));
{
Pixel o1 = new Pixel(fout.GetPixel(x1 + dx, y1 + dy));
ov1.y += o1.y * f;
ov1.cb += o1.cr * f;
ov1.cr += o1.cb * f;
Pixel s1 = new Pixel(fsrc.GetPixel(x1 + dx, y1 + dy));
sv1.y += s1.y * f;
sv1.cb += s1.cr * f;
sv1.cr += s1.cb * f;
Pixel o2 = new Pixel(fout.GetPixel(x2 + dx, y2 + dy));
ov2.y += o2.y * f;
ov2.cb += o2.cr * f;
ov2.cr += o2.cb * f;
Pixel s2 = new Pixel(fsrc.GetPixel(x2 + dx, y2 + dy));
sv2.y += s2.y * f;
sv2.cb += s2.cr * f;
sv2.cr += s2.cb * f;
}
}
}
YRB ox1 = ov1;
YRB ox2 = ov2;
Pixel oc1 = new Pixel(fout.GetPixel(x1, y1));
Pixel oc2 = new Pixel(fout.GetPixel(x2, y2));
ox1.y += fmax * oc2.y - fmax * oc1.y;
ox1.cb += fmax * oc2.cr - fmax * oc1.cr;
ox1.cr += fmax * oc2.cb - fmax * oc1.cb;
ox2.y += fmax * oc1.y - fmax * oc2.y;
ox2.cb += fmax * oc1.cr - fmax * oc2.cr;
ox2.cr += fmax * oc1.cb - fmax * oc2.cb;
int curd = Delta(ov1, sv1, 1) + Delta(ov2, sv2, 1);
int newd = Delta(ox1, sv1, 1) + Delta(ox2, sv2, 1);
if (newd < curd)
{
fout.SetPixel(x1, y1, oc2.color);
fout.SetPixel(x2, y2, oc1.color);
return 1;
}
return 0;
}
int Delta(YRB p1, YRB p2, int sf)
{
int dy = (p1.y - p2.y);
int dr = (p1.cr - p2.cr);
int db = (p1.cb - p2.cb);
return dy * dy * sf + dr * dr + db * db;
}
Bitmap GetData(Bitmap bmp, out Pixel[] Output)
{
FastBitmap fb = new FastBitmap(bmp);
BitmapData bmpData = fb.LockImage();
Output = new Pixel[bmp.Width * bmp.Height];
int p = 0;
for (int y = 0; y < bmp.Height; y++)
{
uint col = fb.GetPixel(0, y);
Output[p++] = new Pixel(col, 0, y);
for (int x = 1; x < bmp.Width; x++)
{
col = fb.GetNextPixel();
Output[p++] = new Pixel(col, x, y);
}
}
fb.UnlockImage(); // Unlock the bits.
Array.Sort(Output, (a, b) => a.y - b.y);
return bmp;
}
void DupImage(PictureBox s, PictureBox d)
{
if (d.Image != null)
d.Image.Dispose();
d.Image = new Bitmap(s.Image.Width, s.Image.Height);
}
void GetImagePB(PictureBox pb, string file)
{
Bitmap bms = new Bitmap(file, false);
Bitmap bmp = bms.Clone(new Rectangle(0, 0, bms.Width, bms.Height), PixelFormat.Format32bppArgb);
bms.Dispose();
if (pb.Image != null)
pb.Image.Dispose();
pb.Image = bmp;
}
}
//Adapted from Visual C# Kicks - http://www.vcskicks.com/
unsafe public class FastBitmap
{
private Bitmap workingBitmap = null;
private int width = 0;
private BitmapData bitmapData = null;
private Byte* pBase = null;
public FastBitmap(Bitmap inputBitmap)
{
workingBitmap = inputBitmap;
}
public BitmapData LockImage()
{
Rectangle bounds = new Rectangle(Point.Empty, workingBitmap.Size);
width = (int)(bounds.Width * 4 + 3) & ~3;
//Lock Image
bitmapData = workingBitmap.LockBits(bounds, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
pBase = (Byte*)bitmapData.Scan0.ToPointer();
return bitmapData;
}
private uint* pixelData = null;
public uint GetPixel(int x, int y)
{
pixelData = (uint*)(pBase + y * width + x * 4);
return *pixelData;
}
public uint GetNextPixel()
{
return *++pixelData;
}
public void GetPixelArray(int x, int y, uint[] Values, int offset, int count)
{
pixelData = (uint*)(pBase + y * width + x * 4);
while (count-- > 0)
{
Values[offset++] = *pixelData++;
}
}
public void SetPixel(int x, int y, uint color)
{
pixelData = (uint*)(pBase + y * width + x * 4);
*pixelData = color;
}
public void SetNextPixel(uint color)
{
*++pixelData = color;
}
public void UnlockImage()
{
workingBitmap.UnlockBits(bitmapData);
bitmapData = null;
pBase = null;
}
}
}
Form1.Designer.cs
namespace Palette
{
partial class Form1
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.components = new System.ComponentModel.Container();
this.panel = new System.Windows.Forms.FlowLayoutPanel();
this.pbSource = new System.Windows.Forms.PictureBox();
this.pbPalette = new System.Windows.Forms.PictureBox();
this.pbOutput = new System.Windows.Forms.PictureBox();
this.btnStart = new System.Windows.Forms.Button();
this.progressBar = new System.Windows.Forms.ProgressBar();
this.imageList1 = new System.Windows.Forms.ImageList(this.components);
this.lvFiles = new System.Windows.Forms.ListView();
this.lnTotSwap = new System.Windows.Forms.Label();
this.lnCurSwap = new System.Windows.Forms.Label();
this.panel.SuspendLayout();
((System.ComponentModel.ISupportInitialize)(this.pbSource)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pbPalette)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pbOutput)).BeginInit();
this.SuspendLayout();
//
// panel
//
this.panel.AutoScroll = true;
this.panel.AutoSize = true;
this.panel.Controls.Add(this.pbSource);
this.panel.Controls.Add(this.pbPalette);
this.panel.Controls.Add(this.pbOutput);
this.panel.Dock = System.Windows.Forms.DockStyle.Top;
this.panel.Location = new System.Drawing.Point(0, 0);
this.panel.Name = "panel";
this.panel.Size = new System.Drawing.Size(748, 266);
this.panel.TabIndex = 3;
this.panel.WrapContents = false;
//
// pbSource
//
this.pbSource.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
this.pbSource.Location = new System.Drawing.Point(3, 3);
this.pbSource.Name = "pbSource";
this.pbSource.Size = new System.Drawing.Size(157, 260);
this.pbSource.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
this.pbSource.TabIndex = 1;
this.pbSource.TabStop = false;
//
// pbPalette
//
this.pbPalette.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
this.pbPalette.Location = new System.Drawing.Point(166, 3);
this.pbPalette.Name = "pbPalette";
this.pbPalette.Size = new System.Drawing.Size(172, 260);
this.pbPalette.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
this.pbPalette.TabIndex = 3;
this.pbPalette.TabStop = false;
//
// pbOutput
//
this.pbOutput.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
this.pbOutput.Location = new System.Drawing.Point(344, 3);
this.pbOutput.Name = "pbOutput";
this.pbOutput.Size = new System.Drawing.Size(172, 260);
this.pbOutput.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize;
this.pbOutput.TabIndex = 4;
this.pbOutput.TabStop = false;
//
// btnStart
//
this.btnStart.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
this.btnStart.Location = new System.Drawing.Point(669, 417);
this.btnStart.Name = "btnStart";
this.btnStart.Size = new System.Drawing.Size(79, 42);
this.btnStart.TabIndex = 4;
this.btnStart.Text = "Start";
this.btnStart.UseVisualStyleBackColor = true;
this.btnStart.Click += new System.EventHandler(this.BtnStart_Click);
//
// progressBar
//
this.progressBar.Dock = System.Windows.Forms.DockStyle.Bottom;
this.progressBar.Location = new System.Drawing.Point(0, 465);
this.progressBar.Name = "progressBar";
this.progressBar.Size = new System.Drawing.Size(748, 16);
this.progressBar.TabIndex = 5;
//
// imageList1
//
this.imageList1.ColorDepth = System.Windows.Forms.ColorDepth.Depth8Bit;
this.imageList1.ImageSize = new System.Drawing.Size(16, 16);
this.imageList1.TransparentColor = System.Drawing.Color.Transparent;
//
// lvFiles
//
this.lvFiles.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.lvFiles.CheckBoxes = true;
this.lvFiles.HideSelection = false;
this.lvFiles.Location = new System.Drawing.Point(12, 362);
this.lvFiles.MultiSelect = false;
this.lvFiles.Name = "lvFiles";
this.lvFiles.Size = new System.Drawing.Size(651, 97);
this.lvFiles.Sorting = System.Windows.Forms.SortOrder.Ascending;
this.lvFiles.TabIndex = 7;
this.lvFiles.UseCompatibleStateImageBehavior = false;
this.lvFiles.View = System.Windows.Forms.View.List;
this.lvFiles.ItemCheck += new System.Windows.Forms.ItemCheckEventHandler(this.lvFiles_ItemCheck);
this.lvFiles.ItemSelectionChanged += new System.Windows.Forms.ListViewItemSelectionChangedEventHandler(this.lvFiles_ItemSelectionChanged);
//
// lnTotSwap
//
this.lnTotSwap.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
this.lnTotSwap.Location = new System.Drawing.Point(669, 362);
this.lnTotSwap.Name = "lnTotSwap";
this.lnTotSwap.Size = new System.Drawing.Size(58, 14);
this.lnTotSwap.TabIndex = 8;
this.lnTotSwap.Text = "label1";
//
// lnCurSwap
//
this.lnCurSwap.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
this.lnCurSwap.Location = new System.Drawing.Point(669, 385);
this.lnCurSwap.Name = "lnCurSwap";
this.lnCurSwap.Size = new System.Drawing.Size(58, 14);
this.lnCurSwap.TabIndex = 9;
this.lnCurSwap.Text = "label1";
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.BackColor = System.Drawing.SystemColors.ControlDark;
this.ClientSize = new System.Drawing.Size(748, 481);
this.Controls.Add(this.lnCurSwap);
this.Controls.Add(this.lnTotSwap);
this.Controls.Add(this.lvFiles);
this.Controls.Add(this.progressBar);
this.Controls.Add(this.btnStart);
this.Controls.Add(this.panel);
this.Name = "Form1";
this.Text = "Form1";
this.Load += new System.EventHandler(this.Form1_Load);
this.panel.ResumeLayout(false);
this.panel.PerformLayout();
((System.ComponentModel.ISupportInitialize)(this.pbSource)).EndInit();
((System.ComponentModel.ISupportInitialize)(this.pbPalette)).EndInit();
((System.ComponentModel.ISupportInitialize)(this.pbOutput)).EndInit();
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.FlowLayoutPanel panel;
private System.Windows.Forms.PictureBox pbSource;
private System.Windows.Forms.PictureBox pbPalette;
private System.Windows.Forms.PictureBox pbOutput;
private System.Windows.Forms.Button btnStart;
private System.Windows.Forms.ProgressBar progressBar;
private System.Windows.Forms.ImageList imageList1;
private System.Windows.Forms.ListView lvFiles;
private System.Windows.Forms.Label lnTotSwap;
private System.Windows.Forms.Label lnCurSwap;
}
}
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
namespace Palette
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
}
}
Check 'Allow unsafe code' in the project properties to compile.
\$\endgroup\$
\$\begingroup\$
Python
Edit: Just realized that you can actually sharpen the source with ImageFilter to make the results better defined.
Rainbow -> Mona Lisa (sharpened Mona Lisa source, Luminance only)

Rainbow -> Mona Lisa (non-sharpened source, weighted with Y = 10, I = 10, Q = 0)

Mona Lisa -> American Gothic (non-sharpened source, Luminance only)

Mona Lisa -> American Gothic (non-sharpened source, weighted with Y = 1, I = 10, Q = 1)

River -> Rainbow (non-sharpened source, Luminance only)

Basically, it reads all the pixels from the two pictures into two lists.
It sorts both with luminance as the key; Y in YIQ represents the luminance.
Then, for each pixel in the source (which is in ascending luminance order), it takes the RGB value from the pixel of the same index in the palette list.
from PIL import Image, ImageFilter
import colorsys

def getPixels(image):
    width, height = image.size
    pixels = []
    for x in range(width):
        for y in range(height):
            pixels.append([(x, y), image.getpixel((x, y))])
    return pixels

def yiq(pixel):
    # y is the luminance
    y, i, q = colorsys.rgb_to_yiq(pixel[1][0], pixel[1][1], pixel[1][2])
    # Change the weights accordingly to get different results
    return 10*y + 0*i + 0*q

# Open the images
source = Image.open('ml.jpg')
palette = Image.open('rainbow.png')
# Sharpen the source... It won't affect the palette anyway =D
source = source.filter(ImageFilter.SHARPEN)
# Sort the two lists by luminance
sourcePixels = sorted(getPixels(source), key=yiq)
palettePixels = sorted(getPixels(palette), key=yiq)
copy = Image.new('RGB', source.size)
# Iterate through all the coordinates of source
# and set the new color
index = 0
for sourcePixel in sourcePixels:
    copy.putpixel(sourcePixel[0], palettePixels[index][1])
    index += 1
# Save the result
copy.save('copy.png')
To keep up with the trend of animations...
Pixels in the scream being quicksorted into starry night and vice-versa

\$\endgroup\$
\$\begingroup\$
C, with Lab color space and improved dithering
Did I say I was done? I lied. I think the algorithm in my other solution is the best out there, but Perl just isn't fast enough for number-crunching tasks, so I reimplemented my work in C. It now runs all of the images in this post at higher quality than the original in about 3 minutes per image; slightly lower-quality runs (at the 0.5% level) take 20-30 seconds per image. Basically all of the work is done with ImageMagick, and the dithering uses ImageMagick's cubic spline interpolation, which gives a better, less patterned result.
Code
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <unistd.h>
#include <wand/MagickWand.h>
#define ThrowWandException(wand) \
{ \
char \
*description; \
\
ExceptionType \
severity; \
\
description=MagickGetException(wand,&severity); \
(void) fprintf(stderr,"%s %s %lu %s\n",GetMagickModule(),description); \
description=(char *) MagickRelinquishMemory(description); \
abort(); \
exit(-1); \
}
int width, height; /* Target image size */
MagickWand *source_wand, *target_wand, *img_wand, *target_lab_wand, *img_lab_wand;
PixelPacket *source_pixels, *target_pixels, *img_pixels, *target_lab_pixels, *img_lab_pixels;
Image *img, *img_lab, *target, *target_lab;
CacheView *img_lab_view, *target_lab_view;
ExceptionInfo *e;
MagickWand *load_image(const char *filename) {
MagickWand *img = NewMagickWand();
if (!MagickReadImage(img, filename)) {
ThrowWandException(img);
}
return img;
}
PixelPacket *get_pixels(MagickWand *wand) {
PixelPacket *ret = GetAuthenticPixels(
GetImageFromMagickWand(wand), 0, 0,
MagickGetImageWidth(wand), MagickGetImageHeight(wand), e);
CatchException(e);
return ret;
}
void sync_pixels(MagickWand *wand) {
SyncAuthenticPixels(GetImageFromMagickWand(wand), e);
CatchException(e);
}
MagickWand *transfer_pixels() {
if (MagickGetImageWidth(source_wand) * MagickGetImageHeight(source_wand)
!= MagickGetImageWidth(target_wand) * MagickGetImageHeight(target_wand)) {
perror("size mismatch");
}
MagickWand *img_wand = CloneMagickWand(target_wand);
img_pixels = get_pixels(img_wand);
memcpy(img_pixels, source_pixels,
MagickGetImageWidth(img_wand) * MagickGetImageHeight(img_wand) * sizeof(PixelPacket));
sync_pixels(img_wand);
return img_wand;
}
MagickWand *image_to_lab(MagickWand *img) {
MagickWand *lab = CloneMagickWand(img);
TransformImageColorspace(GetImageFromMagickWand(lab), LabColorspace);
return lab;
}
int lab_distance(PixelPacket *a, PixelPacket *b) {
int l_diff = (GetPixelL(a) - GetPixelL(b)) / 256,
a_diff = (GetPixela(a) - GetPixela(b)) / 256,
b_diff = (GetPixelb(a) - GetPixelb(b)) / 256;
return (l_diff * l_diff + a_diff * a_diff + b_diff * b_diff);
}
int should_swap(int x1, int x2, int y1, int y2) {
int dist = lab_distance(&img_lab_pixels[width * y1 + x1], &target_lab_pixels[width * y1 + x1])
+ lab_distance(&img_lab_pixels[width * y2 + x2], &target_lab_pixels[width * y2 + x2]);
int swapped_dist = lab_distance(&img_lab_pixels[width * y2 + x2], &target_lab_pixels[width * y1 + x1])
+ lab_distance(&img_lab_pixels[width * y1 + x1], &target_lab_pixels[width * y2 + x2]);
return swapped_dist < dist;
}
void pixel_multiply_add(MagickPixelPacket *dest, PixelPacket *src, double mult) {
dest->red += (double)GetPixelRed(src) * mult;
dest->green += ((double)GetPixelGreen(src) - 32768) * mult;
dest->blue += ((double)GetPixelBlue(src) - 32768) * mult;
}
#define min(x,y) (((x) < (y)) ? (x) : (y))
#define max(x,y) (((x) > (y)) ? (x) : (y))
double mpp_distance(MagickPixelPacket *a, MagickPixelPacket *b) {
double l_diff = QuantumScale * (a->red - b->red),
a_diff = QuantumScale * (a->green - b->green),
b_diff = QuantumScale * (a->blue - b->blue);
return (l_diff * l_diff + a_diff * a_diff + b_diff * b_diff);
}
void do_swap(PixelPacket *pix, int x1, int x2, int y1, int y2) {
PixelPacket tmp = pix[width * y1 + x1];
pix[width * y1 + x1] = pix[width * y2 + x2];
pix[width * y2 + x2] = tmp;
}
int should_swap_dither(double detail, int x1, int x2, int y1, int y2) {
// const InterpolatePixelMethod method = Average9InterpolatePixel;
const InterpolatePixelMethod method = SplineInterpolatePixel;
MagickPixelPacket img1, img2, img1s, img2s, target1, target2;
GetMagickPixelPacket(img, &img1);
GetMagickPixelPacket(img, &img2);
GetMagickPixelPacket(img, &img1s);
GetMagickPixelPacket(img, &img2s);
GetMagickPixelPacket(target, &target1);
GetMagickPixelPacket(target, &target2);
InterpolateMagickPixelPacket(img, img_lab_view, method, x1, y1, &img1, e);
InterpolateMagickPixelPacket(img, img_lab_view, method, x2, y2, &img2, e);
InterpolateMagickPixelPacket(target, target_lab_view, method, x1, y1, &target1, e);
InterpolateMagickPixelPacket(target, target_lab_view, method, x2, y2, &target2, e);
do_swap(img_lab_pixels, x1, x2, y1, y2);
// sync_pixels(img_wand);
InterpolateMagickPixelPacket(img, img_lab_view, method, x1, y1, &img1s, e);
InterpolateMagickPixelPacket(img, img_lab_view, method, x2, y2, &img2s, e);
do_swap(img_lab_pixels, x1, x2, y1, y2);
// sync_pixels(img_wand);
pixel_multiply_add(&img1, &img_lab_pixels[width * y1 + x1], detail);
pixel_multiply_add(&img2, &img_lab_pixels[width * y2 + x2], detail);
pixel_multiply_add(&img1s, &img_lab_pixels[width * y2 + x2], detail);
pixel_multiply_add(&img2s, &img_lab_pixels[width * y1 + x1], detail);
pixel_multiply_add(&target1, &target_lab_pixels[width * y1 + x1], detail);
pixel_multiply_add(&target2, &target_lab_pixels[width * y2 + x2], detail);
double dist = mpp_distance(&img1, &target1)
+ mpp_distance(&img2, &target2);
double swapped_dist = mpp_distance(&img1s, &target1)
+ mpp_distance(&img2s, &target2);
return swapped_dist + 1.0e-4 < dist;
}
int main(int argc, char *argv[]) {
if (argc != 7) {
fprintf(stderr, "Usage: %s source.png target.png dest nodither_pct dither_pct detail\n", argv[0]);
return 1;
}
char *source_filename = argv[1];
char *target_filename = argv[2];
char *dest = argv[3];
double nodither_pct = atof(argv[4]);
double dither_pct = atof(argv[5]);
double detail = atof(argv[6]) - 1;
const int SWAPS_PER_LOOP = 1000000;
int nodither_limit = ceil(SWAPS_PER_LOOP * nodither_pct / 100);
int dither_limit = ceil(SWAPS_PER_LOOP * dither_pct / 100);
int dither = 0, frame = 0;
char outfile[256], cmdline[1024];
sprintf(outfile, "out/%s.png", dest);
MagickWandGenesis();
e = AcquireExceptionInfo();
source_wand = load_image(source_filename);
source_pixels = get_pixels(source_wand);
target_wand = load_image(target_filename);
target_pixels = get_pixels(target_wand);
img_wand = transfer_pixels();
img_pixels = get_pixels(img_wand);
target_lab_wand = image_to_lab(target_wand);
target_lab_pixels = get_pixels(target_lab_wand);
img_lab_wand = image_to_lab(img_wand);
img_lab_pixels = get_pixels(img_lab_wand);
img = GetImageFromMagickWand(img_lab_wand);
target = GetImageFromMagickWand(target_lab_wand);
img_lab_view = AcquireAuthenticCacheView(img, e);
target_lab_view = AcquireAuthenticCacheView(target,e);
CatchException(e);
width = MagickGetImageWidth(img_wand);
height = MagickGetImageHeight(img_wand);
while (1) {
int swaps_made = 0;
for (int n = 0 ; n < SWAPS_PER_LOOP ; n++) {
int x1 = rand() % width,
x2 = rand() % width,
y1 = rand() % height,
y2 = rand() % height;
int swap = dither ?
should_swap_dither(detail, x1, x2, y1, y2)
: should_swap(x1, x2, y1, y2);
if (swap) {
do_swap(img_pixels, x1, x2, y1, y2);
do_swap(img_lab_pixels, x1, x2, y1, y2);
swaps_made ++;
}
}
sync_pixels(img_wand);
if (!MagickWriteImages(img_wand, outfile, MagickTrue)) {
ThrowWandException(img_wand);
}
img_pixels = get_pixels(img_wand);
sprintf(cmdline, "cp out/%s.png anim/%s/%05i.png", dest, dest, frame++);
system(cmdline);
if (!dither && swaps_made < nodither_limit) {
sprintf(cmdline, "cp out/%s.png out/%s-nodither.png", dest, dest);
system(cmdline);
dither = 1;
} else if (dither && swaps_made < dither_limit)
break;
}
return 0;
}
Compile with
gcc -std=gnu99 -O3 -march=native -ffast-math \
-o transfer `pkg-config --cflags MagickWand` \
transfer.c `pkg-config --libs MagickWand` -lm
Results
Mostly the same as the Perl version, just slightly better, but there are a few exceptions. Dithering is less noticeable in general. Scream -> Starry Night doesn't have the "flaming mountain" effect, and the Camaro looks less glitchy with the gray pixels. I think the Perl version's colorspace code has a bug with low-saturation pixels.
American Gothic palette


Mona Lisa palette



Starry Night palette



Scream palette




Spheres palette



Mustang (Camaro palette)

Camaro (Mustang palette)

\$\endgroup\$
\$\begingroup\$
JS
Just run it on two image URLs.
It's a JS package, so you can run it yourself in the browser. Below are fiddles that play around with different settings. Please bear in mind that this fiddle: http://jsfiddle.net/eithe/J7jEk/ will always be up to date (contain all the settings); as this grows (new options are added), I won't be updating all the previous fiddles.
Calls
f("string to image (palette)", "string to image", {object of options});
f([[palette pixel], [palette pixel], ...], "string to image", {object of options});
Options
- algorithm: 'balanced', 'surrounding', 'reverse', 'hsv', 'yiq', 'lab'
- speed: animation speed
- movement: true - should the animation show movement from start to end position
- surrounding: if the 'surrounding' algorithm is selected, this is the weight of the surroundings taken into account when calculating the weight of a given pixel
- h, s, v: if the 'hsv' algorithm is selected, these parameters control how much hue, saturation and value affect the weights
- y, i, q: if the 'yiq' algorithm is selected, these parameters control how much Y, I and Q affect the weights
- l, a, b: if the 'lab' algorithm is selected, these parameters control how much L, a and b affect the weights
- noise: how much randomness will be added to the weights
- unique: should pixels from the palette be used only once (see: Photomosaics or: How Many Programmers Does it Take to Replace a Light Bulb?)
- pixel_1 / pixel_2 {width,height}: size of the pixel (in pixels :D)
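The package source isn't reproduced here, so as a hedged sketch (my reading, not the package's actual code) of the simplest rank-matching idea behind a mode like 'balanced': sort both pixel lists by a weight (plain luminance here) and hand out palette colours rank by rank.

```python
# Hedged sketch of a luminance rank-matching pass, in the spirit of the
# 'balanced' mode described above (not the package's actual source).
def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def balanced_map(source, palette):
    # source, palette: equal-length lists of (r, g, b). Returns the palette
    # colours rearranged to follow the source's luminance ordering.
    src_rank = sorted(range(len(source)), key=lambda i: luminance(source[i]))
    pal_sorted = sorted(palette, key=luminance)
    out = [None] * len(source)
    for rank, i in enumerate(src_rank):
        out[i] = pal_sorted[rank]
    return out

src = [(200, 10, 10), (10, 10, 10), (250, 250, 250)]
pal = [(0, 0, 255), (255, 255, 0), (30, 30, 30)]
out = balanced_map(src, pal)
# The darkest source pixel receives the darkest palette colour
# (pure blue has the lowest luminance here, about 29).
assert out[1] == (0, 0, 255)
```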
Gallery (for the showcases I'm always using Mona Lisa & American Gothic, unless otherwise specified):
- balanced: http://jsfiddle.net/eithe/J7jEk/

- hsv: http://jsfiddle.net/eithe/J7jEk/27/

- lab: http://jsfiddle.net/eithe/J7jEk/26/

- weighted by surrounding as well: http://jsfiddle.net/eithe/J7jEk/18/

- inspired with OP's gifs, now movement included as well: http://jsfiddle.net/eithe/J7jEk/5/
- because one of the comments got me wondering, what would happen if the palette was defined: http://jsfiddle.net/eithe/J7jEk/6/ (result reminds me of EGA, and does weird things with my eyes while generating)

- every color used once, with 5 gradient step between hues (0,0,0 -> 0,0,5 -> 0,0,10 ->...)

- greyscale (0,0,0 -> ... -> 127,127,127 -> ... -> 255,255,255)

- black & white

- Mona Lisa to Mona Lisa with noise: http://jsfiddle.net/eithe/J7jEk/8/

- Mona Lisa to Mona Lisa inverse: http://jsfiddle.net/eithe/J7jEk/21/

- Rotation: http://jsfiddle.net/eithe/J7jEk/28/
- And for fun - the gif from OP showing how balanced algorithm is broken :D http://jsfiddle.net/eithe/J7jEk/30/, lab though gives good result: http://jsfiddle.net/eithe/J7jEk/31/
- And what LAB does when a palette is specified: http://jsfiddle.net/eithe/J7jEk/32/
- having fun with this: Photomosaics or: How Many Programmers Does it Take to Replace a Light Bulb? to produce this: http://jsfiddle.net/eithe/J7jEk/43/ (pixel: 10x10) (this one eye near where mona lisa's eye is freaks me out)

\$\endgroup\$
\$\begingroup\$
HSL nearest value with error propagation and dithering
I've made minor adaptations to the code which I used for my AllRGB images. This is designed to process 16 megapixel images with reasonable time and memory constraints, so it uses some data structure classes which aren't in the standard library; however, I've omitted those because there's already a lot of code here and this is the interesting code.
For AllRGB I manually tune the wavelets which give priority to certain areas of the image; for this unguided usage, I'm picking one wavelet which assumes a rule-of-thirds layout, putting the main interest a third of the way down from the top.
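Each wavelet here is a 5-tuple (cx, cy, sx, sy, amplitude) in normalised image coordinates, contributing a Gaussian bump to a pixel's priority. A minimal Python sketch of that weighting (the function name is mine, not part of the answer):

```python
import math

def wavelet_weight(x, y, wavelets):
    """Sum of Gaussian bumps; each wavelet is (cx, cy, sx, sy, amplitude).

    Mirrors the per-pixel weighting loop in generate() below; x and y are
    normalised to [0, 1].
    """
    weight = 0.0
    for cx, cy, sx, sy, amp in wavelets:
        weight += amp * math.exp(-((x - cx) / sx) ** 2 - ((y - cy) / sy) ** 2)
    return weight

# The default wavelet: centred a third of the way down, full amplitude.
default = [(0.5, 0.333, 0.5, 0.5, 1.0)]
assert wavelet_weight(0.5, 0.333, default) == 1.0   # peak at the centre of interest
assert wavelet_weight(0.0, 1.0, default) < 1.0      # falls off toward the corners
```

Pixels with higher weight are popped from the heap first, so the area of interest is painted while the palette is still full of good matches.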

My favourite of the 36:

Full Cartesian product of (image, palette)
package org.cheddarmonk.graphics;
import org.cheddarmonk.util.*;
import java.awt.Point;
import java.awt.image.*;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;
public class PaletteApproximator {
public static void main(String[] args) throws Exception {
// Adjust this to fine-tune for the areas which are most important.
float[] waveletDefault = new float[] {0.5f, 0.333f, 0.5f, 0.5f, 1};
generateAndSave(args[0], args[1], args[2], waveletDefault);
}
private static void generateAndSave(String paletteFile, String fileIn, String fileOut, float[]... wavelets) throws Exception {
BufferedImage imgIn = ImageIO.read(new File(fileIn));
int w = imgIn.getWidth(), h = imgIn.getHeight();
int[] buf = new int[w * h];
imgIn.getRGB(0, 0, w, h, buf, 0, w);
SimpleOctTreeInt palette = loadPalette(paletteFile);
generate(palette, buf, w, h, wavelets);
// Masks for R, G, B, A.
final int[] off = new int[]{0xff0000, 0xff00, 0xff, 0xff000000};
// The corresponding colour model.
ColorModel colourModel = ColorModel.getRGBdefault();
DataBufferInt dbi = new DataBufferInt(buf, buf.length);
Point origin = new Point(0, 0);
WritableRaster raster = Raster.createPackedRaster(dbi, w, h, w, off, origin);
BufferedImage imgOut = new BufferedImage(colourModel, raster, false, null);
ImageIO.write(imgOut, "PNG", new File(fileOut));
}
private static SimpleOctTreeInt loadPalette(String paletteFile) throws Exception {
BufferedImage img = ImageIO.read(new File(paletteFile));
int w = img.getWidth(), h = img.getHeight();
int[] buf = new int[w * h];
img.getRGB(0, 0, w, h, buf, 0, w);
// Parameters tuned for 4096x4096
SimpleOctTreeInt octtree = new SimpleOctTreeInt(0, 1, 0, 1, 0, 1, 16, 12);
for (int i = 0; i < buf.length; i++) {
octtree.add(buf[i], transform(buf[i]));
}
return octtree;
}
private static void generate(SimpleOctTreeInt octtree, int[] buf, int w, int h, float[]... wavelets) {
int m = w * h;
LeanBinaryHeapInt indices = new LeanBinaryHeapInt();
Random rnd = new Random();
for (int i = 0; i < m; i++) {
float x = (i % w) / (float)w, y = (i / w) / (float)h; // normalise y by height, not width
float weight = 0;
for (float[] wavelet : wavelets) {
weight += wavelet[4] * Math.exp(-Math.pow((x - wavelet[0]) / wavelet[2], 2) - Math.pow((y - wavelet[1]) / wavelet[3], 2));
}
// Random element provides some kind of dither
indices.insert(i, -weight + 0.2f * rnd.nextFloat());
}
// Error diffusion buffers.
float[] errx = new float[m], erry = new float[m], errz = new float[m];
for (int i = 0; i < m; i++) {
int idx = indices.pop();
int x = idx % w, y = idx / w;
// TODO Bicubic interpolation? For the time being, prefer to scale the input image externally...
float[] tr = transform(buf[x + w * y]);
tr[0] += errx[idx]; tr[1] += erry[idx]; tr[2] += errz[idx];
int pixel = octtree.nearestNeighbour(tr, 2);
buf[x + y * w] = 0xff000000 | pixel;
// Don't reuse pixels.
float[] trPix = transform(pixel);
boolean ok = octtree.remove(pixel, trPix);
if (!ok) throw new IllegalStateException("Failed to remove from octtree");
// Propagate error in 4 directions, not caring whether or not we've already done that pixel.
// This will lose some error, but that might be a good thing.
float dx = (tr[0] - trPix[0]) / 4, dy = (tr[1] - trPix[1]) / 4, dz = (tr[2] - trPix[2]) / 4;
if (x > 0) {
errx[idx - 1] += dx;
erry[idx - 1] += dy;
errz[idx - 1] += dz;
}
if (x < w - 1) {
errx[idx + 1] += dx;
erry[idx + 1] += dy;
errz[idx + 1] += dz;
}
if (y > 0) {
errx[idx - w] += dx;
erry[idx - w] += dy;
errz[idx - w] += dz;
}
if (y < h - 1) {
errx[idx + w] += dx;
erry[idx + w] += dy;
errz[idx + w] += dz;
}
}
}
private static final float COS30 = (float)Math.sqrt(3) / 2;
private static float[] transform(int rgb) {
float r = ((rgb >> 16) & 0xff) / 255.f;
float g = ((rgb >> 8) & 0xff) / 255.f;
float b = (rgb & 0xff) / 255.f;
// HSL cone coords
float cmax = (r > g) ? r : g; if (b > cmax) cmax = b;
float cmin = (r < g) ? r : g; if (b < cmin) cmin = b;
float[] cone = new float[3];
cone[0] = (cmax + cmin) / 2;
cone[1] = 0.5f * (1 + r - (g + b) / 2);
cone[2] = 0.5f * (1 + (g - b) * COS30);
return cone;
}
}
\$\endgroup\$
\$\begingroup\$
Python
Not pretty codewise, nor by results.
from blist import blist
from PIL import Image
import random
def randpop(colors):
j = random.randrange(len(colors))
return colors.pop(j)
colors = blist(Image.open('in1.png').getdata())
random.shuffle(colors)
target = Image.open('in2.png')
out = target.copy()
data = list(list(i) for i in out.getdata())
assert len(data) == len(colors)
w, h = out.size
coords = []
for i in xrange(h):
for j in xrange(w):
coords.append((i, j))
# Adjust color balance
dsum = [sum(d[i] for d in data) for i in xrange(3)]
csum = [sum(c[i] for c in colors) for i in xrange(3)]
adjust = [(csum[i] - dsum[i]) // len(data) for i in xrange(3)]
for i, j in coords:
for k in xrange(3):
data[i*w + j][k] += adjust[k]
random.shuffle(coords)
# a larger value here gives better results but takes longer
choose = 100
threshold = 10
done = set()
while len(coords):
if not len(coords) % 1000:
print len(coords) // 1000
i, j = coords.pop()
ind = i*w + j
done.add(ind)
t = data[ind]
dmin = 255*3
kmin = 0
choices = []
while colors and len(choices) < choose:
k = len(choices)
choices.append(randpop(colors))
c = choices[-1]
d = sum(abs(t[l] - c[l]) for l in xrange(3))
if d < dmin:
dmin = d
kmin = k
if d < threshold:
break
c = choices.pop(kmin)
data[ind] = c
colors.extend(choices)
# Push the error to nearby pixels for dithering
if ind + 1 < len(data) and ind + 1 not in done:
ind2 = ind + 1
elif ind + w < len(data) and ind + w not in done:
ind2 = ind + w
elif ind > 0 and ind - 1 not in done:
ind2 = ind - 1
elif ind - w > 0 and ind - w not in done:
ind2 = ind - w
else:
ind2 = None
if ind2 is not None:
for k in xrange(3):
err = abs(t[k] - c[k])
data[ind2][k] += err
out.putdata(data)
out.save('out.png')
Possible improvements:
- smarter color correction?
- better quality metric?
- push the error to all surrounding pixels rather than one
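The last improvement could look something like the following hedged sketch, which splits the error evenly among whichever 4-neighbours are still unvisited (`diffuse_error` and its arguments are my own names, not part of the answer):

```python
def diffuse_error(data, done, ind, w, err):
    """Split a per-channel error among the unvisited 4-neighbours of a pixel.

    `data` is a flat list of [r, g, b] lists for a w-wide image and `done`
    is the set of already-finalised indices; a sketch of the
    'push to all surrounding pixels' idea, not the answer's code.
    """
    x, y = ind % w, ind // w
    h = len(data) // w
    neighbours = []
    if x > 0:
        neighbours.append(ind - 1)
    if x < w - 1:
        neighbours.append(ind + 1)
    if y > 0:
        neighbours.append(ind - w)
    if y < h - 1:
        neighbours.append(ind + w)
    neighbours = [n for n in neighbours if n not in done]
    if not neighbours:
        return
    share = [e // len(neighbours) for e in err]
    for n in neighbours:
        for k in range(3):
            data[n][k] += share[k]

# 2x2 image, flat; push an error of 8 per channel from the top-left pixel.
grid = [[100, 100, 100] for _ in range(4)]
diffuse_error(grid, {0}, 0, 2, [8, 8, 8])
assert grid[1] == [104, 104, 104] and grid[2] == [104, 104, 104]
```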
Ugly (1->2):

A bit better (2->1):

Decent (2->3):

Like a bad raytracer (3->4):

Cheating – use all the good pixels on the upper half and claim paint ran out:

\$\endgroup\$
\$\begingroup\$
Python (using kd-tree and luminosity)
Nice challenge. I decided to go with a kd-tree approach. The basic idea behind using a kd-tree is that it divides the colors and the luminosity according to their presence in the picture.
The first split of the kd-tree is based on red: it splits all the colors into two approximately equal groups of reds (light red and dark red). Next it splits these two partitions along the greens, then blue, then luminosity, then red again, and so on until the tree has been built. With this approach I built a kd-tree for the source image and the destination image, then mapped the tree from the source onto the destination and overwrote the color data of the destination file. All of the results are shown here.
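The axis-cycling median split described above can be sketched in a few lines (a simplified illustration using dicts, not the answer's actual KDTree class):

```python
def build(points, depth=0, dims=4):
    """Median split on the axis selected by depth: r, g, b, luminosity, r, ...

    Each node holds the median point on the current axis; the halves recurse
    with the next axis. A sketch of the cycling split, not the full code below.
    """
    if not points:
        return None
    axis = depth % dims
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build(points[:mid], depth + 1, dims),
        "right": build(points[mid + 1:], depth + 1, dims),
    }

# Three (r, g, b, lum) points: the root splits on red (axis 0).
pts = [(10, 0, 0, 3), (200, 0, 0, 60), (90, 0, 0, 27)]
tree = build(pts)
assert tree["point"][0] == 90          # median red at the root
assert tree["left"]["point"][0] == 10  # darker reds to the left
```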
Some examples:
Mona Lisa -> American Gothic

American Gothic -> Mona Lisa

Starry Night --> The Scream

The Scream --> Starry Night

Rainbow spheres

Here is a short movie made using @Calvin's Hobbies movie frame maker:

And now for the code :-)
from PIL import Image
""" Computation of hue, saturation, luminosity.
Based on http://stackoverflow.com/questions/3732046/how-do-you-get-the-hue-of-a-xxxxxx-colour
"""
def rgbToLsh(t):
r = t[0]
g = t[1]
b = t[2]
r /= 255.
g /= 255.
b /= 255.
vmax = max([r, g, b])
vmin = min([r, g, b]);
h = s = l = (vmax + vmin) / 2.;
if (vmax == vmin):
h = s = 0. # achromatic
else:
d = vmax - vmin;
if l > 0.5:
s = d / (2. - vmax - vmin)
else:
s = d / (vmax + vmin);
if vmax == r:
if g<b:
m = 6.
else:
m = 0.
h = (g - b) / d + m
elif vmax == g:
h = (b - r) / d + 2.
elif vmax == b:
h = (r - g) / d + 4.
h /= 6.;
return [l,s,h];
""" KDTree implementation.
Based on https://code.google.com/p/python-kdtree/
"""
__version__ = "1r11.1.2010"
__all__ = ["KDTree"]
def square_distance(pointA, pointB):
# squared euclidean distance
distance = 0
dimensions = len(pointA) # assumes both points have the same dimensions
for dimension in range(dimensions):
distance += (pointA[dimension] - pointB[dimension])**2
return distance
class KDTreeNode():
def __init__(self, point, left, right):
self.point = point
self.left = left
self.right = right
def is_leaf(self):
return (self.left == None and self.right == None)
class KDTreeNeighbours():
""" Internal structure used in nearest-neighbours search.
"""
def __init__(self, query_point, t):
self.query_point = query_point
self.t = t # neighbours wanted
self.largest_distance = 0 # squared
self.current_best = []
def calculate_largest(self):
if self.t >= len(self.current_best):
self.largest_distance = self.current_best[-1][1]
else:
self.largest_distance = self.current_best[self.t-1][1]
def add(self, point):
sd = square_distance(point, self.query_point)
# run through current_best, try to find appropriate place
for i, e in enumerate(self.current_best):
if i == self.t:
return # enough neighbours, this one is farther, let's forget it
if e[1] > sd:
self.current_best.insert(i, [point, sd])
self.calculate_largest()
return
# append it to the end otherwise
self.current_best.append([point, sd])
self.calculate_largest()
def get_best(self):
return [element[0] for element in self.current_best[:self.t]]
class KDTree():
""" KDTree implementation.
Example usage:
from kdtree import KDTree
data = <load data> # iterable of points (which are also iterable, same length)
point = <the point of which neighbours we're looking for>
tree = KDTree.construct_from_data(data)
nearest = tree.query(point, t=4) # find nearest 4 points
"""
def __init__(self, data):
self.data_listing = []
def build_kdtree(point_list, depth):
# code based on wikipedia article: http://en.wikipedia.org/wiki/Kd-tree
if not point_list:
return None
# select axis based on depth so that axis cycles through all valid values
axis = depth % 4 #len(point_list[0]) # assumes all points have the same dimension
# sort point list and choose median as pivot point,
# TODO: better selection method, linear-time selection, distribution
point_list.sort(key=lambda point: point[axis])
median = len(point_list)/2 # choose median
# create node and recursively construct subtrees
node = KDTreeNode(point=point_list[median],
left=build_kdtree(point_list[0:median], depth+1),
right=build_kdtree(point_list[median+1:], depth+1))
# add point to listing
self.data_listing.append(point_list[median])
return node
self.root_node = build_kdtree(data, depth=0)
@staticmethod
def construct_from_data(data):
tree = KDTree(data)
return tree
def query(self, query_point, t=1):
statistics = {'nodes_visited': 0, 'far_search': 0, 'leafs_reached': 0}
def nn_search(node, query_point, t, depth, best_neighbours):
if node == None:
return
#statistics['nodes_visited'] += 1
# if we have reached a leaf, let's add to current best neighbours,
# (if it's better than the worst one or if there is not enough neighbours)
if node.is_leaf():
#statistics['leafs_reached'] += 1
best_neighbours.add(node.point)
return
# this node is no leaf
# select dimension for comparison (based on current depth)
axis = depth % len(query_point)
# figure out which subtree to search
near_subtree = None # near subtree
far_subtree = None # far subtree (perhaps we'll have to traverse it as well)
# compare query_point and point of current node in selected dimension
# and figure out which subtree is farther than the other
if query_point[axis] < node.point[axis]:
near_subtree = node.left
far_subtree = node.right
else:
near_subtree = node.right
far_subtree = node.left
# recursively search through the tree until a leaf is found
nn_search(near_subtree, query_point, t, depth+1, best_neighbours)
# while unwinding the recursion, check if the current node
# is closer to query point than the current best,
# also, until t points have been found, search radius is infinity
best_neighbours.add(node.point)
# check whether there could be any points on the other side of the
# splitting plane that are closer to the query point than the current best
if (node.point[axis] - query_point[axis])**2 < best_neighbours.largest_distance:
#statistics['far_search'] += 1
nn_search(far_subtree, query_point, t, depth+1, best_neighbours)
return
# if there's no tree, there's no neighbors
if self.root_node != None:
neighbours = KDTreeNeighbours(query_point, t)
nn_search(self.root_node, query_point, t, depth=0, best_neighbours=neighbours)
result = neighbours.get_best()
else:
result = []
#print statistics
return result
#List of files:
files = ['JXgho.png','N6IGO.png','c5jq1.png','itzIe.png','xPAwA.png','y2VZJ.png']
#Loop over source files
for im_orig in range(len(files)):
srch = Image.open(files[im_orig]) #Open file handle
src = srch.load(); #Load file
# Build data structure (R,G,B,lum,xpos,ypos) for source file
srcdata = [(src[i,j][0],src[i,j][1],src[i,j][2],rgbToLsh(src[i,j])[0],i,j) \
for i in range(srch.size[0]) \
for j in range(srch.size[1])]
# Build kd-tree for source
srctree = KDTree.construct_from_data(srcdata)
for im in range(len(files)):
desh = Image.open(files[im])
des = desh.load();
# Build data structure (R,G,B,lum,xpos,ypos) for destination file
desdata = [(des[i,j][0],des[i,j][1],des[i,j][2],rgbToLsh(des[i,j])[0],i,j) \
for i in range(desh.size[0]) \
for j in range(desh.size[1])]
# Build kd-tree for destination
destree = KDTree.construct_from_data(desdata)
# Switch file mode
desh.mode = srch.mode
for k in range(len(srcdata)):
# Get locations from kd-tree sorted data
i = destree.data_listing[k][-2]
j = destree.data_listing[k][-1]
i_s = srctree.data_listing[k][-2]
j_s = srctree.data_listing[k][-1]
# Overwrite original colors with colors from source file
des[i,j] = src[i_s,j_s]
# Save to disk
desh.save(files[im_orig].replace('.','_'+`im`+'.'))
\$\endgroup\$
\$\begingroup\$
Python
Just to keep the ball rolling, here's my own simple and painfully slow answer.
import Image
def countColors(image):
colorCounts = {}
for color in image.getdata():
if color in colorCounts:
colorCounts[color] += 1
else:
colorCounts[color] = 1
return colorCounts
def colorDist(c1, c2):
def ds(c1, c2, i):
return (c1[i] - c2[i])**2
return (ds(c1, c2, 0) + ds(c1, c2, 1) + ds(c1, c2, 2))**0.5
def findClosestColor(palette, color):
closest = None
minDist = (3*255**2)**0.5
for c in palette:
dist = colorDist(color, c)
if dist < minDist:
minDist = dist
closest = c
return closest
def removeColor(palette, color):
if palette[color] == 1:
del palette[color]
else:
palette[color] -= 1
def go(paletteFile, sourceFile):
palette = countColors(Image.open(paletteFile).convert('RGB'))
source = Image.open(sourceFile).convert('RGB')
copy = Image.new('RGB', source.size)
w, h = copy.size
for x in range(w):
for y in range(h):
c = findClosestColor(palette, source.getpixel((x, y)))
removeColor(palette, c)
copy.putpixel((x, y), c)
print x #print progress
copy.save('copy.png')
#the respective file paths go here
go('../ag.png', '../r.png')
For each pixel in the source it looks for the unused pixel in the palette that is closest in the RGB color cube. It's basically the same as Quincunx's algorithm but with no randomness and a different color comparison function.
You can tell I move from left to right since the right side of the image has far less detail due to the depletion of similar colors.
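One way to spread the depletion evenly instead of starving the right side would be to visit pixels in shuffled order rather than scan order. This is a hypothetical tweak sketched below, not part of the answer above (`visit_order` is my own name):

```python
import random

def visit_order(w, h, shuffle=True):
    """Coordinates to visit when assigning palette colors.

    Shuffling spreads the depletion of similar palette colors evenly over
    the image instead of concentrating it on whichever side is visited last.
    """
    coords = [(x, y) for x in range(w) for y in range(h)]
    if shuffle:
        random.shuffle(coords)
    return coords

order = visit_order(3, 2)
assert len(order) == 6
assert sorted(order) == [(x, y) for x in range(3) for y in range(2)]
```

The trade-off is that shuffling removes the smooth left-to-right gradient of quality in exchange for uniformly distributed noise.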
River from American Gothic

Mona Lisa from Rainbow Spheres

\$\endgroup\$
\$\begingroup\$
Haskell
I tried a few different approaches using nearest neighbor searches before settling on this solution (which was actually my first idea). I first convert the images' pixel formats to YCbCr and construct two lists containing their pixel data. Then, the lists are sorted giving precedence to luminance value. After that, I just replace the input image's sorted pixel list with the palette image's, and then resort it back to the original order and use it to construct a new image.
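The sort, pair, and un-sort pipeline can be sketched compactly in Python (the Haskell implementation follows; pixels are (Y, Cb, Cr) tuples and the helper name is mine):

```python
def swap_palette(img_pixels, pal_pixels):
    """Pair up same-rank pixels of image and palette, then restore image order.

    Both lists are sorted with luminance (Y) taking precedence, zipped
    together, and the palette pixels are written back in the image's
    original pixel order -- a sketch of the approach described above.
    """
    key = lambda item: (item[1][0], item[1][1], item[1][2])  # Y, then Cb, then Cr
    img_sorted = sorted(enumerate(img_pixels), key=key)
    pal_sorted = sorted(enumerate(pal_pixels), key=key)
    paired = [(i, p) for (i, _), (_, p) in zip(img_sorted, pal_sorted)]
    paired.sort()  # back to the image's pixel order
    return [p for _, p in paired]

# The brightest image pixel receives the brightest palette pixel.
assert swap_palette([(200, 0, 0), (10, 0, 0)],
                    [(5, 0, 0), (250, 0, 0)]) == [(250, 0, 0), (5, 0, 0)]
```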
module Main where
import System.Environment (getArgs)
import System.Exit (exitSuccess, exitFailure)
import System.Console.GetOpt (getOpt, ArgOrder(..), OptDescr(..), ArgDescr(..))
import Data.List (sortBy)
import Codec.Picture
import Codec.Picture.Types
import qualified Data.Vector as V
main :: IO ()
main = do
(ioOpts, _) <- getArgs >>= getOpts
opts <- ioOpts
image <- loadImage $ imageFile opts
palette <- loadImage $ paletteFile opts
case swapPalette image palette of
Nothing -> do
putStrLn "Error: image and palette dimensions do not match"
exitFailure
Just img ->
writePng (outputFile opts) img
swapPalette :: Image PixelYCbCr8 -> Image PixelYCbCr8 -> Maybe (Image PixelRGB8)
swapPalette img pal
| area1 == area2 =
let cmpCr (_, (PixelYCbCr8 _ _ r1)) (_, (PixelYCbCr8 _ _ r2)) = r1 `compare` r2
cmpCb (_, (PixelYCbCr8 _ c1 _)) (_, (PixelYCbCr8 _ c2 _)) = c1 `compare` c2
cmpY (_, (PixelYCbCr8 y1 _ _)) (_, (PixelYCbCr8 y2 _ _)) = y2 `compare` y1
w = imageWidth img
h = imageHeight img
imgData = sortBy cmpY $ sortBy cmpCr $ sortBy cmpCb $ zip [1 :: Int ..] $ getPixelList img
palData = sortBy cmpY $ sortBy cmpCr $ sortBy cmpCb $ zip [1 :: Int ..] $ getPixelList pal
newData = zipWith (\(n, _) (_, p) -> (n, p)) imgData palData
pixData = map snd $ sortBy (\(n1, _) (n2, _) -> n1 `compare` n2) newData
dataVec = V.reverse $ V.fromList pixData
in Just $ convertImage $ generateImage (lookupPixel dataVec w h) w h
| otherwise = Nothing
where area1 = (imageWidth img) * (imageHeight img)
area2 = (imageWidth pal) * (imageHeight pal)
lookupPixel :: V.Vector PixelYCbCr8 -> Int -> Int -> Int -> Int -> PixelYCbCr8
lookupPixel vec w h x y = vec V.! i
where i = flattenIndex w h x y
getPixelList :: Image PixelYCbCr8 -> [PixelYCbCr8]
getPixelList img = foldl (\ps (x, y) -> (pixelAt img x y):ps) [] coords
where coords = [(x, y) | x <- [0..(imageWidth img) - 1], y <- [0..(imageHeight img) - 1]]
flattenIndex :: Int -> Int -> Int -> Int -> Int
flattenIndex _ h x y = y + (x * h)
-------------------------------------------------
-- Command Line Option Functions
-------------------------------------------------
getOpts :: [String] -> IO (IO Options, [String])
getOpts args = case getOpt Permute options args of
(opts, nonOpts, []) -> return (foldl (>>=) (return defaultOptions) opts, nonOpts)
(_, _, errs) -> do
putStrLn $ concat errs
printUsage
exitFailure
data Options = Options
{ imageFile :: Maybe FilePath
, paletteFile :: Maybe FilePath
, outputFile :: FilePath
}
defaultOptions :: Options
defaultOptions = Options
{ imageFile = Nothing
, paletteFile = Nothing
, outputFile = "out.png"
}
options :: [OptDescr (Options -> IO Options)]
options = [ Option ['i'] ["image"] (ReqArg setImage "FILE") "",
Option ['p'] ["palette"] (ReqArg setPalette "FILE") "",
Option ['o'] ["output"] (ReqArg setOutput "FILE") "",
Option ['v'] ["version"] (NoArg showVersion) "",
Option ['h'] ["help"] (NoArg exitPrintUsage) ""]
setImage :: String -> Options -> IO Options
setImage image opts = return $ opts { imageFile = Just image }
setPalette :: String -> Options -> IO Options
setPalette palette opts = return $ opts { paletteFile = Just palette }
setOutput :: String -> Options -> IO Options
setOutput output opts = return $ opts { outputFile = output }
printUsage :: IO ()
printUsage = do
putStrLn "Usage: repix [OPTION...] -i IMAGE -p PALETTE [-o OUTPUT]"
putStrLn "Rearrange pixels in the palette file to closely resemble the given image."
putStrLn ""
putStrLn "-i, --image specify the image to transform"
putStrLn "-p, --palette specify the image to use as the palette"
putStrLn "-o, --output specify the output image file"
putStrLn ""
putStrLn "-v, --version display version information and exit"
putStrLn "-h, --help display this help and exit"
exitPrintUsage :: a -> IO Options
exitPrintUsage _ = do
printUsage
exitSuccess
showVersion :: a -> IO Options
showVersion _ = do
putStrLn "Pixel Rearranger v0.1"
exitSuccess
-------------------------------------------------
-- Image Loading Util Functions
-------------------------------------------------
loadImage :: Maybe FilePath -> IO (Image PixelYCbCr8)
loadImage Nothing = do
printUsage
exitFailure
loadImage (Just path) = do
rdImg <- readImage path
case rdImg of
Left err -> do
putStrLn err
exitFailure
Right img -> getRGBImage img
getRGBImage :: DynamicImage -> IO (Image PixelYCbCr8)
getRGBImage dynImg =
case dynImg of
ImageYCbCr8 img -> return img
ImageRGB8 img -> return $ convertImage img
ImageY8 img -> return $ convertImage (promoteImage img :: Image PixelRGB8)
ImageYA8 img -> return $ convertImage (promoteImage img :: Image PixelRGB8)
ImageCMYK8 img -> return $ convertImage (convertImage img :: Image PixelRGB8)
ImageRGBA8 img -> return $ convertImage (pixelMap dropTransparency img :: Image PixelRGB8)
_ -> do
putStrLn "Error: incompatible image type."
exitFailure
Results
The images my program produces tend to be less vivid than many of the other solutions, and it doesn't deal with large solid areas or gradients well.
Here is a link to the full album.
American Gothic -> Mona Lisa

Mona Lisa -> American Gothic

Spheres -> Mona Lisa

The Scream -> Starry Night

The Scream -> Spheres

\$\endgroup\$
\$\begingroup\$
Java
Inspired by the previous Java answer from Quincunx.
package paletteswap;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import javax.imageio.ImageIO;
public class Test
{
public static class Bits
{
public static BitSet convert( int value )
{
BitSet bits = new BitSet();
int index = 0;
while ( value != 0L )
{
if ( value % 2 != 0 )
{
bits.set( index );
}
++index;
value = value >>> 1;
}
return bits;
}
public static int convert( BitSet bits )
{
int value = 0;
for ( int i = 0; i < bits.length(); ++i )
{
value += bits.get( i ) ? ( 1 << i ) : 0;
}
return value;
}
}
public static void main( String[] args ) throws IOException
{
BufferedImage source = ImageIO.read( resource( "river.png" ) ); // my names for the files
BufferedImage palette = ImageIO.read( resource( "farmer.png" ) );
BufferedImage result = rearrange( source, palette );
ImageIO.write( result, "png", resource( "result.png" ) );
}
public static BufferedImage rearrange( BufferedImage source, BufferedImage palette )
{
BufferedImage result = new BufferedImage( source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB );
// This creates a list of points in the source image.
// Then we sort it by packed intensity and draw points in that order.
List<Point> samples = getPoints( source.getWidth(), source.getHeight() );
Collections.sort( samples, new Comparator<Point>()
{
@Override
public int compare( Point o1, Point o2 )
{
int c1 = getRGB( source, o1.x, o1.y );
int c2 = getRGB( source, o2.x, o2.y );
return c1 - c2;
}
} );
// Create a list of colors in the palette.
List<Integer> colors = getColors( palette );
while ( !samples.isEmpty() )
{
Point currentPoint = samples.remove( 0 );
int sourceAtPoint = getRGB( source, currentPoint.x, currentPoint.y );
int colorIndex = binarySearch( colors, sourceAtPoint );
int bestColor = colors.remove( colorIndex );
setRGB( result, currentPoint.x, currentPoint.y, bestColor );
}
return result;
}
public static int unpack( int rgbPacked )
{
BitSet packed = Bits.convert( rgbPacked );
BitSet rgb = Bits.convert( 0 );
for (int i=0; i<8; i++)
{
rgb.set( i, packed.get( i*3 ) );
rgb.set( i+16, packed.get( i*3+1 ) );
rgb.set( i+8, packed.get( i*3+2 ) );
}
return Bits.convert( rgb);
}
public static int pack( int rgb )
{
int myrgb = rgb & 0x00FFFFFF;
BitSet bits = Bits.convert( myrgb );
BitSet packed = Bits.convert( 0 );
for (int i=0; i<8; i++)
{
packed.set( i*3, bits.get( i ) );
packed.set( i*3+1, bits.get( i+16 ) );
packed.set( i*3+2, bits.get( i+8 ) );
}
return Bits.convert( packed);
}
public static int getRGB( BufferedImage image, int x, int y )
{
return pack( image.getRGB( x, y ) );
}
public static void setRGB( BufferedImage image, int x, int y, int color )
{
image.setRGB( x, y, unpack( color ) );
}
public static List<Point> getPoints( int width, int height )
{
List<Point> points = new ArrayList<>( width * height );
for ( int x = 0; x < width; x++ )
{
for ( int y = 0; y < height; y++ )
{
points.add( new Point( x, y ) );
}
}
return points;
}
public static List<Integer> getColors( BufferedImage img )
{
int width = img.getWidth();
int height = img.getHeight();
List<Integer> colors = new ArrayList<>( width * height );
for ( int x = 0; x < width; x++ )
{
for ( int y = 0; y < height; y++ )
{
colors.add( getRGB( img, x, y ) );
}
}
Collections.sort( colors );
return colors;
}
public static int binarySearch( List<Integer> toSearch, int obj )
{
int index = toSearch.size() >> 1;
for ( int guessChange = toSearch.size() >> 2; guessChange > 0; guessChange >>= 1 )
{
int value = toSearch.get( index );
if ( obj == value )
{
return index;
}
else if ( obj < value )
{
index -= guessChange;
}
else
{
index += guessChange;
}
}
return index;
}
public static File resource( String fileName )
{ // This method is here solely for my ease of use.
return new File( System.getProperty( "user.home" ) + "/pictureswap/" + fileName );
}
}
Mona Lisa -> Farmers

What it does: it sorts the points to be replaced by their intensity, instead of visiting them in random order.
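The pack/unpack pair above works by interleaving the bits of the three channels (Morton order), so that ordinary integer comparison weighs R, G and B roughly equally instead of being dominated by one channel. A Python sketch of the packing direction:

```python
def pack(rgb):
    """Interleave the bits of R, G and B so plain integer comparison
    weighs all three channels about equally.

    A sketch of what the BitSet-based pack() above does, not a port of it;
    the exact channel-to-bit assignment differs but the idea is the same.
    """
    r, g, b = (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF
    out = 0
    for i in range(8):
        out |= ((b >> i) & 1) << (3 * i)      # blue bit i -> position 3i
        out |= ((r >> i) & 1) << (3 * i + 1)  # red bit i  -> position 3i+1
        out |= ((g >> i) & 1) << (3 * i + 2)  # green bit i -> position 3i+2
    return out

# Brighter colours interleave to larger keys regardless of channel:
assert pack(0x000000) == 0
assert pack(0xFFFFFF) == (1 << 24) - 1
assert pack(0x808080) > pack(0x404040)
```

Sorting on this key makes the binary search over the palette behave like an intensity-balanced nearest-colour lookup.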
\$\endgroup\$
\$\begingroup\$
Ruby
Overview:
Really simple approach, but seems to get pretty good results:
- Take the palette and target, sort their pixels both by some function; call these the "reference" arrays. I've chosen to sort by HSLA, but preferring Luminance to Saturation to Hue (aka "LSHA")
- Make the output image by iterating over each pixel of the target image, finding where it gets sorted to in the target reference array, and taking the pixel from the palette which got sorted to the same index in the palette reference array.
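The two-reference-array lookup in the second step can be sketched in Python (sorting by raw value rather than LSHA, with helper names of my own; the Ruby implementation follows):

```python
def sorted_refs(values):
    """Indices of `values` in sorted order (the 'reference' array above)."""
    return [i for _, i in sorted((v, i) for i, v in enumerate(values))]

def rearrange(target, palette):
    """For each target pixel, find its rank in the sorted target, then take
    the palette pixel holding the same rank.

    A sketch of the two-reference-array lookup described above.
    """
    t_ref, p_ref = sorted_refs(target), sorted_refs(palette)
    rank_of = {idx: rank for rank, idx in enumerate(t_ref)}
    return [palette[p_ref[rank_of[i]]] for i in range(len(target))]

# The darkest palette value lands where the darkest target value was, etc.
assert rearrange([30, 10, 20], [7, 5, 6]) == [7, 5, 6]
```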
Code:
require 'rubygems'
require 'chunky_png'
require 'rmagick' # just for the rgba => hsla converter, feel free to use something lighter-weight you have on hand
def pixel_array_for_image(image)
# [r, b, g, a]
image.pixels.map{|p| ChunkyPNG::Color.to_truecolor_alpha_bytes(p)}
end
def sorted_pixel_references(pixel_array)
pixel_array.map{|a| yield(a)}.map.with_index.sort_by(&:first).map(&:last)
end
def sort_by_lsha(pixel_array)
sorted_pixel_references(pixel_array) {|p|
# feel free to drop in any sorting function you want here!
hsla = Magick::Pixel.new(*p).to_hsla # [h, s, l, a]
[hsla[2], hsla[1], hsla[0], hsla[3]]
}
end
def make_target_out_of_palette(target_filename, palette_filename, output_filename)
puts "making #{target_filename} out of #{palette_filename}"
palette = ChunkyPNG::Image.from_file(palette_filename)
target = ChunkyPNG::Image.from_file(target_filename)
puts " loaded images"
palette_array = pixel_array_for_image(palette)
target_array = pixel_array_for_image(target)
puts " have pixel arrays"
palette_spr = sort_by_lsha(palette_array)
target_spr = sort_by_lsha(target_array)
puts " have sorted-pixel reference arrays"
output = ChunkyPNG::Image.new(target.dimension.width, target.dimension.height, ChunkyPNG::Color::TRANSPARENT)
(0...target_array.count).each { |index|
spr_index = target_spr.index(index)
index_in_palette = palette_spr[spr_index]
palette_pixel = palette_array[index_in_palette]
index_as_x = (index % target.dimension.width)
index_as_y = (index / target.dimension.width)
output[index_as_x, index_as_y] = ChunkyPNG::Color.rgba(*palette_pixel)
}
output.save(output_filename)
puts " saved to #{output_filename}"
end
palette_filename, target_filename, output_filename = ARGV
make_target_out_of_palette(target_filename, palette_filename, output_filename)
Results:
https://i.sstatic.net/Wo63Q.jpg
Highlights:

\$\endgroup\$
\$\begingroup\$
Perl
Here is a rather simplistic approach. It takes about five seconds to generate 100 frames per image pair on my MacBook Pro with a memory footprint of about 120 MB.
The idea is to sort the pixels in both the source and palette images by 24-bit packed RGB, and replace colors in the source with colors from the palette sequentially.
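The core idea fits in a few lines of Python (a sketch with my own names; the Perl implementation follows):

```python
def recolour(source, palette):
    """source/palette: equal-length flat lists of 0xRRGGBB ints.

    Sort both by packed colour, then write each palette colour to the
    coordinate of the source pixel holding the same colour rank -- a
    sketch of the sequential replacement described above.
    """
    src_by_colour = sorted(range(len(source)), key=lambda i: source[i])
    pal_colours = sorted(palette)
    out = [0] * len(source)
    for rank, i in enumerate(src_by_colour):
        out[i] = pal_colours[rank]
    return out

src = [0x202020, 0x000000, 0xFFFFFF]
pal = [0x111111, 0xEEEEEE, 0x050505]
# Darkest palette colour lands where the darkest source pixel was, etc.
assert recolour(src, pal) == [0x111111, 0x050505, 0xEEEEEE]
```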
#!/usr/bin/env perl
use 5.020; # just because
use strict;
use warnings;
use Const::Fast;
use GD;
GD::Image->trueColor(1);
use Path::Class;
const my $COLOR => 0;
const my $COORDINATES => 1;
const my $RGB => 2;
const my $ANIMATION_FRAMES => 100;
const my %MASK => (
RED => 0x00ff0000,
GREEN => 0x0000ff00,
BLUE => 0x000000ff,
);
run(@ARGV);
sub run {
unless (@_ == 2) {
die "Need source and palette images\n";
}
my $source_file = file(shift)->resolve;
my $palette_file = file(shift)->resolve;
my $source = GD::Image->new("$source_file")
or die "Failed to create source image from '$source_file'";
my $palette = GD::Image->new("$palette_file")
or die "Failed to create palette image from '$palette_file'";
my %source = map { $_ => $source->$_ } qw(width height);
my %palette = map { $_ => $palette->$_ } qw(width height);
my ($frame_prefix) = ($source_file->basename =~ /\A([^.]+)/);
unless (
(my $source_area = $source{width} * $source{height}) <=
(my $palette_area = $palette{width} * $palette{height})
) {
die "Source area ($source_area) is greater than palette area ($palette_area)";
}
my ($last_frame, $png) = recreate_source_image_from_palette(
\%source,
get_source_pixels( get_pixels_by_color($source, \%source) ),
get_palette_colors( get_pixels_by_color($palette, \%palette) ),
sub { save_frame($frame_prefix, @_) }
);
save_frame($frame_prefix, $last_frame, $png);
return;
}
sub save_frame {
my $frame_prefix = shift;
my $frame = shift;
my $png = shift;
file(
sprintf("${frame_prefix}-%d.png", $frame)
)->spew(iomode => '>:raw', $$png);
return;
}
sub recreate_source_image_from_palette {
my $dim = shift;
my $source_pixels = shift;
my $palette_colors = shift;
my $callback = shift;
my $frame = 0;
my %colors;
$colors{$_} = undef for @$palette_colors;
my $gd = GD::Image->new($dim->{width}, $dim->{height}, 1);
for my $x (keys %colors) {
$colors{$x} = $gd->colorAllocate(unpack_rgb($x));
}
my $period = sprintf '%.0f', @$source_pixels / $ANIMATION_FRAMES;
for my $i (0 .. $#$source_pixels) {
$gd->setPixel(
@{ $source_pixels->[$i] },
$colors{ $palette_colors->[$i] }
);
if ($i % $period == 0) {
$callback->($frame, \ $gd->png);
$frame += 1;
}
}
return ($frame, \ $gd->png);
}
sub get_palette_colors { [ map sprintf('%08X', $_->[$COLOR]), @{ $_[0] } ] }
sub get_source_pixels { [ map $_->[$COORDINATES], @{ $_[0] } ] }
sub get_pixels_by_color {
my $gd = shift;
my $dim = shift;
return [
sort { $a->[$COLOR] <=> $b->[$COLOR] }
map {
my $y = $_;
map {
[ pack_rgb( $gd->rgb( $gd->getPixel($_, $y) ) ), [$_, $y] ];
} 0 .. $dim->{width} - 1
} 0 .. $dim->{height} - 1
];
}
sub pack_rgb { $_[0] << 16 | $_[1] << 8 | $_[2] }
sub unpack_rgb {
my ($r, $g, $b) = map $MASK{$_} & hex($_[0]), qw(RED GREEN BLUE);
return ($r >> 16, $g >> 8, $b);
}
Output
Scream using Starry Night palette

American Gothic using Mona Lisa colors

Mona Lisa using Scream colors

River using Marbles colors

Animations
I was lazy, so I put the animations on YouTube: Mona Lisa using colors from Starry Night and American Gothic using colors from Mona Lisa.
\$\endgroup\$
\$\begingroup\$
Python
Figured I'd take this little opportunity to take up code golf and use it as an excuse to work on my Python chops, since it's been coming up more often at work these days. I ran through a couple of algorithms, including a few with O(n^2) and O(n log n) time, to try to optimize the colors, but it became very apparent that this was both computationally expensive and actually had very little effect on the apparent result. So below is my take, which works in O(n) time (basically instantaneously on my system), gets the most important visual element (luminance) as right as is reasonable, and lets the chroma land where it may.
from PIL import Image
def check(palette, copy):
palette = sorted(palette.getdata())
copy = sorted(copy.getdata())
print "Master says it's good!" if copy == palette else "The master disapproves."
def GetLuminance(pixel):
# Extract the pixel channel data (PIL RGB pixels come as (r, g, b))
r, g, b = pixel
# and use the standard luminance weights to compute luminance.
return .3*r+.59*g+.11*b
print "Putting pixels on the palette..."
# Open up the image and get all of the pixels out of it. (Memory intensive!)
palette = Image.open("2.png").convert(mode="RGB")
pixelsP = [] # Allocate the array
width,height = palette.size # Unpack the image size
for y in range(height): # Scan the lines
for x in range(width): # within the line, scan the pixels
curpixel = palette.getpixel((x,y)) # get the pixel
pixelsP.append((GetLuminance(curpixel),curpixel)) # and add a (luminance, color) tuple to the array.
# sort the pixels by the calculated luminance
pixelsP.sort()
print "Getting the reference picture..."
# Open up the image and get all of the pixels out of it. (Memory intensive!)
source = Image.open("6.png").convert(mode="RGB")
pixelsR = [] # Allocate the array
width,height = source.size # Unpack the image size
for y in range(height): # Scan the lines
for x in range(width): # within the line, scan the pixels
curpixel = source.getpixel((x,y)) # get the pixel
pixelsR.append((GetLuminance(curpixel),(x,y))) # and add a (luminance, position) tuple
# Sort the Reference pixels by luminance too
pixelsR.sort()
# Now for the neat observation. Luminance matters more to humans than chrominance,
# so we want to match luminance as well as we can. However, we have
# a finite luminance distribution to work with. Since we can't change that, it's best
# just to line the two images up, sorted by luminance, and simply assign the
# luminance directly. The chrominance will be all kinds of whack, but fixing that
# by loosely sorting out possible chrominance errors takes this algorithm from O(n)
# to O(n^2), which just takes forever (trust me, I've tried it).
print "Painting reference with palette..."
for p in range(len(pixelsP)): # For each pixel in the palette
pl,pixel = pixelsP[p] # Grab the pixel from the palette
l,cord = pixelsR[p] # Grab the location from the reference
source.putpixel(cord,pixel) # and assign the palette pixel to the reference pixel's place
print "Applying fixative..."
# save out the result.
source.save("o.png","PNG")
print "Handing it to the master to see if he approves..."
check(palette, source)
print "Done!"
The end result has some neat consequences. However, if the images have wildly different luminance distributions, there's not much that can be done without getting advanced and dithering, which might be an interesting thing to do at some point but is excluded here for brevity.
Everything -> Mona Lisa

Mona Lisa -> Spheres

\$\endgroup\$
\$\begingroup\$
Mathematica - random permutations
Idea
Select two pixels in the source image and check whether the error relative to the destination image would decrease if these two pixels were swapped. We add a small random number (-d|+d) to the result to avoid local minima. Repeat. For speed, do this with 10,000 pixels at once.
It is a bit like a Markov chain. It would probably be good to decrease the randomness during the optimization process, similar to simulated annealing.
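The swap test above can be sketched in a few lines of Python (a minimal, scalar version with hypothetical helper names; the Mathematica code below does the same thing vectorized, testing thousands of pairs per step):

```python
import random

def err(p, q):
    # squared distance between two RGB triples (channels in [0, 1])
    return sum((x - y) ** 2 for x, y in zip(p, q))

def try_swap(a, t, delta=0.05, rng=random):
    """Pick two pixels of arrangement `a`; swap them if doing so reduces
    the error against target `t`, plus a little jitter to escape local
    minima. Returns True if a swap was made."""
    i, j = rng.sample(range(len(a)), 2)
    gain = (err(a[i], t[i]) + err(a[j], t[j])      # error as-is
            - err(a[i], t[j]) - err(a[j], t[i])    # error if swapped
            + rng.uniform(-delta**2, delta**2))    # randomness
    if gain > 0:
        a[i], a[j] = a[j], a[i]
    return gain > 0
```

With delta=0 this is a pure greedy swap; raising delta occasionally accepts worsening swaps, which is what the delta=0.05 and delta=0.15 results below illustrate.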
Code
colorSpace = "RGB";
\[Delta] = 0.05;
ClearAll[loadImgur, imgToList, listToImg, improveN, err, rearrange, \
rearrangeImg]
loadImgur[tag_] :=
RemoveAlphaChannel@
Import["https://i.sstatic.net/" <> tag <> ".png"]
imgToList[img_] := Flatten[ImageData[ColorConvert[img, colorSpace]], 1]
listToImg[u_, w_] := Image[Partition[u, w], ColorSpace -> colorSpace]
err[{x_, y_, z_}] := x^2 + y^2 + z^2
improveN[a_, t_, n_] := Block[{i, j, ai, aj, ti, tj},
{i, j} = Partition[RandomSample[Range@Length@a, 2 n], n];
ai = a[[i]];
aj = a[[j]];
ti = t[[i]];
tj = t[[j]];
ReplacePart[
a, (#1 -> #3) & @@@
Select[Transpose[{i,
err /@ (ai - ti) + err /@ (aj - tj) - err /@ (ai - tj) -
err /@ (aj - ti) + RandomReal[\[Delta]^2 {-1, +1}, n], aj}], #[[2]] > 0 &]]
]
rearrange[ua_, ub_, iterations_: 100] := Block[{tmp = ua},
Do[tmp = improveN[tmp, ub, Floor[.1 Length@ua]];, {i, iterations}];
tmp]
rearrangeImg[a_, b_, iterations_: 100] := With[{imgdst = loadImgur[b]},
listToImg[rearrange[
RandomSample@imgToList@loadImgur[a],
imgToList@imgdst, iterations], First@ImageDimensions@imgdst]]
rearrangeImg["JXgho","itzIe"]
Results
Gothic to Mona Lisa. Left: Using LAB color space (delta=0). Right: Using RGB color space (delta=0)

Gothic to Mona Lisa. Left: RGB color space, delta=0.05. Right: RGB color space, delta=0.15.

The following images show animations for around 3,500,000 swaps with RGB color space and delta=0.

\$\endgroup\$
\$\begingroup\$
Processing
The source and palette are shown side-by-side, and there is an animation of the pixels being taken from the palette.
In the line int i = chooseIndexIncremental();, you can change the chooseIndex* functions to see the selection order of the pixels.
int scanRate = 20; // pixels per frame
// image filenames
String source = "N6IGO.png";
String palette = "JXgho.png";
PImage src, pal, res;
int area;
int[] lut;
boolean[] processed;
boolean[] taken;
int count = 0;
void start() {
//size(800, 600);
src = loadImage(source);
pal = loadImage(palette);
size(src.width + pal.width, max(src.height, pal.height));
src.loadPixels();
pal.loadPixels();
int areaSrc = src.pixels.length;
int areaPal = pal.pixels.length;
if (areaSrc != areaPal) {
println("Areas mismatch: src: " + areaSrc + ", pal: " + areaPal);
return;
}
area = areaSrc;
println("Area: " + area);
lut = new color[area];
taken = new boolean[area];
processed = new boolean[area];
randomSeed(1);
}
void draw() {
background(0);
image(src, 0, 0);
image(pal, src.width, 0);
for (int k = 0; k < scanRate; k ++)
if (count < area) {
// choose from chooseIndexRandom, chooseIndexSkip and chooseIndexIncremental
int i = chooseIndexIncremental();
process(i);
processed[i] = true;
count ++;
}
}
int chooseIndexRandom() {
int i = 0;
do i = (int) random(area); while (processed[i]);
return i;
}
int chooseIndexSkip(int n) {
int i = (n * count) % area;
while (i >= area || processed[i]) i = (int) random(area);
return i;
}
int chooseIndexIncremental() {
return count;
}
void process(int i) {
lut[i] = findPixel(src.pixels[i]);
taken[lut[i]] = true;
src.loadPixels();
src.pixels[i] = pal.pixels[lut[i]];
src.updatePixels();
pal.loadPixels();
pal.pixels[lut[i]] = color(0);
pal.updatePixels();
stroke(src.pixels[i]);
int sy = i / src.width;
int sx = i % src.width;
int j = lut[i];
int py = j / pal.width;
int px = j % pal.width;
line(sx, sy, src.width + px, py);
}
int findPixel(color c) {
int best;
do best = (int) random(area); while (taken[best]);
float bestDist = colorDist(c, pal.pixels[best]);
for (int k = 0; k < area; k ++) {
if (taken[k]) continue;
color c1 = pal.pixels[k];
float dist = colorDist(c, c1);
if (dist < bestDist) {
bestDist = dist;
best = k;
}
}
return best;
}
float colorDist(color c1, color c2) {
return S(red(c1) - red(c2)) + S(green(c1) - green(c2)) + S(blue(c1) - blue(c2));
}
float S(float x) { return x * x; }
American Gothic -> Mona Lisa, incremental

American Gothic -> Mona Lisa, random

\$\endgroup\$
\$\begingroup\$
C-Sharp
No new/exciting ideas, but I thought I'd give it a try. It simply sorts the pixels, prioritizing brightness over saturation over hue. The code is reasonably short though, for what it's worth.
EDIT: Added an even shorter version
using System;
using System.Drawing;
using System.Collections.Generic;
class Program
{
static void Main(string[] args)
{
Bitmap sourceImg = new Bitmap("TheScream.png"),
arrangImg = new Bitmap("StarryNight.png"),
destImg = new Bitmap(arrangImg.Width, arrangImg.Height);
List<Pix> sourcePixels = new List<Pix>(), arrangPixels = new List<Pix>();
for (int i = 0; i < sourceImg.Width; i++)
for (int j = 0; j < sourceImg.Height; j++)
sourcePixels.Add(new Pix(sourceImg.GetPixel(i, j), i, j));
for (int i = 0; i < arrangImg.Width; i++)
for (int j = 0; j < arrangImg.Height; j++)
arrangPixels.Add(new Pix(arrangImg.GetPixel(i, j), i, j));
sourcePixels.Sort();
arrangPixels.Sort();
for (int i = 0; i < arrangPixels.Count; i++)
destImg.SetPixel(arrangPixels[i].x,
arrangPixels[i].y,
sourcePixels[i].col);
destImg.Save("output.png");
}
}
class Pix : IComparable<Pix>
{
public Color col;
public int x, y;
public Pix(Color col, int x, int y)
{
this.col = col;
this.x = x;
this.y = y;
}
public int CompareTo(Pix other)
{
return(int)(255 * 255 * 255 * (col.GetBrightness() - other.col.GetBrightness())
+ (255 * (col.GetHue() - other.col.GetHue()))
+ (255 * 255 * (col.GetSaturation() - other.col.GetSaturation())));
}
}
Sample:
+

=

\$\endgroup\$
\$\begingroup\$
Java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import javax.imageio.ImageIO;
/**
*
* @author Quincunx
*/
public class PixelRearrangerMK2 {
public static void main(String[] args) throws IOException {
BufferedImage source = ImageIO.read(resource("Raytraced Spheres.png"));
BufferedImage palette = ImageIO.read(resource("American Gothic.png"));
BufferedImage result = rearrange(source, palette);
ImageIO.write(result, "png", resource("result.png"));
validate(palette, result);
}
public static BufferedImage rearrange(BufferedImage source, BufferedImage palette) {
List<Color> sColors = Color.getColors(source);
List<Color> pColors = Color.getColors(palette);
Collections.sort(sColors);
Collections.sort(pColors);
BufferedImage result = new BufferedImage(source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
Iterator<Color> sIter = sColors.iterator();
Iterator<Color> pIter = pColors.iterator();
while (sIter.hasNext()) {
Color s = sIter.next();
Color p = pIter.next();
result.setRGB(s.x, s.y, p.rgb);
}
return result;
}
public static class Color implements Comparable {
int x, y;
int rgb;
double hue;
private int r, g, b;
public Color(int x, int y, int rgb) {
this.x = x;
this.y = y;
this.rgb = rgb;
r = (rgb & 0xFF0000) >> 16;
g = (rgb & 0x00FF00) >> 8;
b = rgb & 0x0000FF;
hue = Math.atan2(Math.sqrt(3) * (g - b), 2 * r - g - b);
}
@Override
public int compareTo(Object o) {
Color c = (Color) o;
return hue < c.hue ? -1 : hue == c.hue ? 0 : 1;
}
public static List<Color> getColors(BufferedImage img) {
List<Color> result = new ArrayList<>();
for (int y = 0; y < img.getHeight(); y++) {
for (int x = 0; x < img.getWidth(); x++) {
result.add(new Color(x, y, img.getRGB(x, y)));
}
}
return result;
}
}
//Validation and util methods follow
public static void validate(BufferedImage palette, BufferedImage result) {
List<Integer> paletteColors = getColorsAsInt(palette);
List<Integer> resultColors = getColorsAsInt(result);
Collections.sort(paletteColors);
Collections.sort(resultColors);
System.out.println(paletteColors.equals(resultColors));
}
public static List<Integer> getColorsAsInt(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
List<Integer> colors = new ArrayList<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
colors.add(img.getRGB(x, y));
}
}
Collections.sort(colors);
return colors;
}
public static File resource(String fileName) {
return new File(System.getProperty("user.dir") + "/Resources/" + fileName);
}
}
Here's a completely different idea. I create a list of the colors of each image, then sort according to hue, which is computed by Wikipedia's formula:

Unlike my other answer, this is extremely fast. It takes about 2 seconds, including the validation.
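The hue computation in the `Color` constructor above (the atan2 form of the formula) can be sketched on its own for a packed 0xRRGGBB integer:

```python
import math

def hue(rgb):
    """Hue angle of a packed 0xRRGGBB color, in radians (-pi, pi],
    via atan2(sqrt(3)*(G - B), 2R - G - B)."""
    r = (rgb >> 16) & 0xFF
    g = (rgb >> 8) & 0xFF
    b = rgb & 0xFF
    return math.atan2(math.sqrt(3) * (g - b), 2 * r - g - b)
```

Pure red maps to angle 0, with green and blue at +120 and -120 degrees respectively, so sorting by this value walks around the color wheel.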
The result is some abstract art. Here are some images (mouseover to see to/from):

\$\endgroup\$
\$\begingroup\$
Python
Well, I decided I might as well post my solution, since I spent the time to do it. Essentially, what I figured I would do is get the raw pixel data of the images, sort the data by brightness, and then put the values of the same index into a new image. I changed my mind about the brightness and used luminance instead. I got some pretty good results with this.
from PIL import Image
from optparse import OptionParser
def key_func(arr):
# Sort the pixels by luminance
r = 0.2126*arr[0] + 0.7152*arr[1] + 0.0722*arr[2]
return r
def main():
# Parse options from the command line
parser = OptionParser()
parser.add_option("-p", "--pixels", dest="pixels",
help="use pixels from FILE", metavar="FILE")
parser.add_option("-i", "--input", dest="input", metavar="FILE",
help="recreate FILE")
parser.add_option("-o", "--out", dest="output", metavar="FILE",
help="output to FILE", default="output.png")
(options, args) = parser.parse_args()
if not options.pixels or not options.input:
raise Exception("Missing arguments. See help for more info.")
# Load the images
im1 = Image.open(options.pixels)
im2 = Image.open(options.input)
# Get the images into lists
px1 = list(im1.getdata())
px2 = list(im2.getdata())
w1, h1 = im1.size
w2, h2 = im2.size
if w1*h1 != w2*h2:
raise Exception("Images must have the same number of pixels.")
# Sort the pixels lists by luminance
px1_s = sorted(px1, key=key_func)
px2_s = sorted(px2, key=key_func)
# Create an array of nothing but black pixels
arr = [(0, 0, 0)]*w2*h2
# Create a dict that contains a list of locations with pixel value as key
# This speeds up the process a lot, since before it was O(n^2)
locations_cache = {}
for index, val in enumerate(px2):
v = str(val)
if v in locations_cache:
locations_cache[v].append(index)
else:
locations_cache[v] = [index]
# Loop through each value of the sorted pixels
for index, val in enumerate(px2_s):
# Find the original location of the pixel
# v = px2.index(val)
v = locations_cache[str(val)].pop(0)
# Set the value of the array at the given location to the pixel of the
# equivalent luminance from the source image
arr[v] = px1_s[index]
# v2 = px1.index(px1_s[index])
# Set the value of px2 to an arbitrary value outside of the RGB range
# This prevents duplicate pixel locations
# I would use "del px2[v]", but it wouldn't work for some reason
px2[v] = (512, 512, 512)
# px1[v2] = (512, 512, 512)
# Print the percent progress
print("%f%%" % (100.0 * index / len(px2)))
"""if index % 500 == 0 or index == len(px2_s)-1:
if h1 > h2:
size = (w1+w2, h1)
else:
size = (w1+w2, h2)
temp_im1 = Image.new("RGB", im2.size)
temp_im1.putdata(arr)
temp_im2 = Image.new("RGB", im1.size)
temp_im2.putdata(px1)
temp_im3 = Image.new("RGB", size)
temp_im3.paste(temp_im1, (0, 0))
temp_im3.paste(temp_im2, (w2, 0))
temp_im3.save("still_frames/img_%04d.png" % (index/500))"""
# Save the image
im3 = Image.new('RGB', im2.size)
im3.putdata(arr)
im3.save(options.output)
if __name__ == '__main__':
main()
Results
I was pretty happy with the results. It seemed to work consistently for all of the images I put through it.
Starry Night with Scream Pixels

Starry Night with Rainbow Pixels

Rainbow with Starry Night Pixels

Mona Lisa with Scream Pixels

River with Starry Night Pixels

Mona Lisa with American Gothic Pixels

Mustang with Chevy Pixels
I should probably have scaled the images down given my hardware constraints, but oh well.

Chevy with Mustang Pixels

River with Rainbow Pixels

Mona Lisa with Rainbow Pixels

American Gothic with Rainbow Pixels

Update I've added a few more pictures, and here are a couple of animations. The first shows how my method worked, and the second is using the script @Calvin'sHobbies posted.


Update 2 I added a dict storing the indices of pixels by their color. This made the script way more efficient. To see the original, check the revision history.
\$\endgroup\$
\$\begingroup\$
C++11
In the end, I settled on a relatively simple deterministic greedy algorithm. This is single threaded, but runs in a hair over 4 seconds on my machine.
The basic algorithm works by sorting all the pixels in both the palette and the target image by decreasing luminance (the L* of L*a*b*). Then, for each of those ordered target pixels, it searches for the closest match in the first 75 entries of the palette, using the square of the CIEDE2000 distance metric with the luminance of the palette colors clamped to that of the target. (For the implementation and debugging of CIEDE2000, this page was very helpful.) The best match is then removed from the palette, assigned to the result, and the algorithm moves on to the next-lightest pixel in the target image.
By using sorted luminance for both the target and the palette, we ensure that the overall luminance (the most visually salient element) of the result matches the target as closely as possible. Using a small window of 75 entries gives it a good shot at finding a matching color of about the right brightness, if there is one. If there isn't one, then the color will be off, but at least the brightness should be consistent. As a result, it degrades fairly gracefully when the palette colors don't match well.
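The windowed greedy matching can be sketched in Python as follows (a simplified version using plain squared RGB distance in place of CIEDE2000 and luminance clamping; those refinements live in the C++ code below):

```python
def luminance(c):
    # Rec. 709 luma weights as a stand-in for L*
    r, g, b = c
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def greedy_match(target, palette, window=75):
    """target: list of (color, pixel_index); palette: list of colors.
    Both are ordered by decreasing luminance; each target pixel takes
    the best match among the first `window` remaining palette entries."""
    pal = sorted(palette, key=luminance, reverse=True)
    out = [None] * len(target)
    for color, idx in sorted(target, key=lambda t: luminance(t[0]),
                             reverse=True):
        k = min(range(min(window, len(pal))),
                key=lambda i: sum((a - b) ** 2
                                  for a, b in zip(color, pal[i])))
        out[idx] = pal.pop(k)  # consume the chosen palette entry
    return out
```

Because both lists are consumed in luminance order, each match can only come from a narrow luminance band, which is what keeps the result's brightness faithful even when the hues are off.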
Code
To compile this you will need the ImageMagick++ development libraries. A small CMake file to compile it is also included below.
palette.cpp
#include <Magick++.h>
#include <algorithm>
#include <functional>
#include <utility>
#include <set>
using namespace std;
using namespace Magick;
struct Lab
{
PixelPacket rgb;
float L, a, b;
explicit Lab(
PixelPacket rgb )
: rgb( rgb )
{
auto R_srgb = static_cast< float >( rgb.red ) / QuantumRange;
auto G_srgb = static_cast< float >( rgb.green ) / QuantumRange;
auto B_srgb = static_cast< float >( rgb.blue ) / QuantumRange;
auto R_lin = R_srgb < 0.04045f ? R_srgb / 12.92f :
powf( ( R_srgb + 0.055f ) / 1.055f, 2.4f );
auto G_lin = G_srgb < 0.04045f ? G_srgb / 12.92f :
powf( ( G_srgb + 0.055f ) / 1.055f, 2.4f );
auto B_lin = B_srgb < 0.04045f ? B_srgb / 12.92f :
powf( ( B_srgb + 0.055f ) / 1.055f, 2.4f );
auto X = 0.4124f * R_lin + 0.3576f * G_lin + 0.1805f * B_lin;
auto Y = 0.2126f * R_lin + 0.7152f * G_lin + 0.0722f * B_lin;
auto Z = 0.0193f * R_lin + 0.1192f * G_lin + 0.9502f * B_lin;
auto X_norm = X / 0.9505f;
auto Y_norm = Y / 1.0000f;
auto Z_norm = Z / 1.0890f;
auto fX = ( X_norm > 216.0f / 24389.0f ?
powf( X_norm, 1.0f / 3.0f ) :
X_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
auto fY = ( Y_norm > 216.0f / 24389.0f ?
powf( Y_norm, 1.0f / 3.0f ) :
Y_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
auto fZ = ( Z_norm > 216.0f / 24389.0f ?
powf( Z_norm, 1.0f / 3.0f ) :
Z_norm * ( 841.0f / 108.0f ) + 4.0f / 29.0f );
L = 116.0f * fY - 16.0f;
a = 500.0f * ( fX - fY );
b = 200.0f * ( fY - fZ );
}
bool operator<(
Lab const that ) const
{
return ( L > that.L ? true :
L < that.L ? false :
a > that.a ? true :
a < that.a ? false :
b > that.b );
}
Lab clampL(
Lab const that ) const
{
auto result = Lab( *this );
if ( result.L > that.L )
result.L = that.L;
return result;
}
float cieDe2000(
Lab const that,
float const k_L = 1.0f,
float const k_C = 1.0f,
float const k_H = 1.0f ) const
{
auto square = []( float value ){ return value * value; };
auto degs = []( float rad ){ return rad * 180.0f / 3.14159265359f; };
auto rads = []( float deg ){ return deg * 3.14159265359f / 180.0f; };
auto C_1 = hypot( a, b );
auto C_2 = hypot( that.a, that.b );
auto C_bar = ( C_1 + C_2 ) * 0.5f;
auto C_bar_7th = square( square( C_bar ) ) * square( C_bar ) * C_bar;
auto G = 0.5f * ( 1.0f - sqrtf( C_bar_7th / ( C_bar_7th + 610351562.0f ) ) );
auto a_1_prime = ( 1.0f + G ) * a;
auto a_2_prime = ( 1.0f + G ) * that.a;
auto C_1_prime = hypot( a_1_prime, b );
auto C_2_prime = hypot( a_2_prime, that.b );
auto h_1_prime = C_1_prime == 0.0f ? 0.0f : degs( atan2f( b, a_1_prime ) );
if ( h_1_prime < 0.0f )
h_1_prime += 360.0f;
auto h_2_prime = C_2_prime == 0.0f ? 0.0f : degs( atan2f( that.b, a_2_prime ) );
if ( h_2_prime < 0.0f )
h_2_prime += 360.0f;
auto delta_L_prime = that.L - L;
auto delta_C_prime = C_2_prime - C_1_prime;
auto delta_h_prime =
C_1_prime * C_2_prime == 0.0f ? 0 :
fabs( h_2_prime - h_1_prime ) <= 180.0f ? h_2_prime - h_1_prime :
h_2_prime - h_1_prime > 180.0f ? h_2_prime - h_1_prime - 360.0f :
h_2_prime - h_1_prime + 360.0f;
auto delta_H_prime = 2.0f * sqrtf( C_1_prime * C_2_prime ) *
sinf( rads( delta_h_prime * 0.5f ) );
auto L_bar_prime = ( L + that.L ) * 0.5f;
auto C_bar_prime = ( C_1_prime + C_2_prime ) * 0.5f;
auto h_bar_prime =
C_1_prime * C_2_prime == 0.0f ? h_1_prime + h_2_prime :
fabs( h_1_prime - h_2_prime ) <= 180.0f ? ( h_1_prime + h_2_prime ) * 0.5f :
h_1_prime + h_2_prime < 360.0f ? ( h_1_prime + h_2_prime + 360.0f ) * 0.5f :
( h_1_prime + h_2_prime - 360.0f ) * 0.5f;
auto T = ( 1.0f
- 0.17f * cosf( rads( h_bar_prime - 30.0f ) )
+ 0.24f * cosf( rads( 2.0f * h_bar_prime ) )
+ 0.32f * cosf( rads( 3.0f * h_bar_prime + 6.0f ) )
- 0.20f * cosf( rads( 4.0f * h_bar_prime - 63.0f ) ) );
auto delta_theta = 30.0f * expf( -square( ( h_bar_prime - 275.0f ) / 25.0f ) );
auto C_bar_prime_7th = square( square( C_bar_prime ) ) *
square( C_bar_prime ) * C_bar_prime;
auto R_C = 2.0f * sqrtf( C_bar_prime_7th / ( C_bar_prime_7th + 610351562.0f ) );
auto S_L = 1.0f + ( 0.015f * square( L_bar_prime - 50.0f ) /
sqrtf( 20.0f + square( L_bar_prime - 50.0f ) ) );
auto S_C = 1.0f + 0.045f * C_bar_prime;
auto S_H = 1.0f + 0.015f * C_bar_prime * T;
auto R_T = -sinf( rads( 2.0f * delta_theta ) ) * R_C;
return (
square( delta_L_prime / ( k_L * S_L ) ) +
square( delta_C_prime / ( k_C * S_C ) ) +
square( delta_H_prime / ( k_H * S_H ) ) +
R_T * delta_C_prime * delta_H_prime / ( k_C * S_C * k_H * S_H ) );
}
};
Image read_image(
char * const filename )
{
auto result = Image( filename );
result.type( TrueColorType );
result.matte( true );
result.backgroundColor( Color( 0, 0, 0, QuantumRange ) );
return result;
}
template< typename T >
multiset< T > map_image(
Image const &image,
function< T( unsigned, PixelPacket ) > const transform )
{
auto width = image.size().width();
auto height = image.size().height();
auto result = multiset< T >();
auto pixels = image.getConstPixels( 0, 0, width, height );
for ( auto index = 0; index < width * height; ++index, ++pixels )
result.emplace( transform( index, *pixels ) );
return result;
}
int main(
int argc,
char **argv )
{
auto palette = map_image(
read_image( argv[ 1 ] ),
function< Lab( unsigned, PixelPacket ) >(
[]( unsigned index, PixelPacket rgb ) {
return Lab( rgb );
} ) );
auto target_image = read_image( argv[ 2 ] );
auto target_colors = map_image(
target_image,
function< pair< Lab, unsigned >( unsigned, PixelPacket ) >(
[]( unsigned index, PixelPacket rgb ) {
return make_pair( Lab( rgb ), index );
} ) );
auto pixels = target_image.setPixels(
0, 0,
target_image.size().width(),
target_image.size().height() );
for ( auto &&target : target_colors )
{
auto best_color = palette.begin();
auto best_difference = 1.0e38f;
auto count = 0;
for ( auto candidate = palette.begin();
candidate != palette.end() && count < 75;
++candidate, ++count )
{
auto difference = target.first.cieDe2000(
candidate->clampL( target.first ) );
if ( difference < best_difference )
{
best_color = candidate;
best_difference = difference;
}
}
pixels[ target.second ] = best_color->rgb;
palette.erase( best_color );
}
target_image.syncPixels();
target_image.write( argv[ 3 ] );
return 0;
}
CMakeList.txt
cmake_minimum_required( VERSION 2.8.11 )
project( palette )
add_executable( palette palette.cpp)
find_package( ImageMagick COMPONENTS Magick++ )
if( ImageMagick_FOUND )
include_directories( ${ImageMagick_INCLUDE_DIRS} )
target_link_libraries( palette ${ImageMagick_LIBRARIES} )
endif( ImageMagick_FOUND )
Result
The full album is here. Of the results below, my favorites are probably American Gothic with the Mona Lisa palette, and Starry Night with the Spheres palette.
American Gothic Palette

Mona Lisa Palette

River Palette

The Scream Palette

Spheres Palette

Starry Night Palette

\$\endgroup\$
\$\begingroup\$
C++
Not the shortest code, but it generates the answer 'instantly' despite being single-threaded and unoptimized. I am pleased with the results.
I generate two sorted lists of pixels, one for each image; the sorting is based on a weighted 'brightness' value. I use 100% green, 50% red and 10% blue to calculate the brightness, weighting it for the human eye (more or less). I then swap each pixel in the source image for the same-indexed pixel in the palette image, and write out the destination image.
I use the FreeImage library to read/write the image files.
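The sort-and-pair scheme above boils down to a few lines; here is a minimal Python sketch of it (hypothetical helper names, working on in-memory pixel lists rather than FreeImage bitmaps):

```python
def brightness(c):
    # the weighted brightness used above: 0.5*R + 1.0*G + 0.1*B
    r, g, b = c
    return 0.5 * r + g + 0.1 * b

def repaint(src_pixels, pal_pixels):
    """src_pixels: list of ((x, y), color); pal_pixels: list of colors.
    Sort both by brightness and pair them up rank-for-rank, returning a
    mapping from source position to its assigned palette color."""
    src = sorted(src_pixels, key=lambda p: brightness(p[1]), reverse=True)
    pal = sorted(pal_pixels, key=brightness, reverse=True)
    return {pos: col for (pos, _), col in zip(src, pal)}
```

Since both lists are fully sorted once, the whole thing is dominated by the O(n log n) sorts, which is why the C++ version runs essentially instantly.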
Code
/* Inputs: 2 image files of same area
Outputs: image1 made from pixels of image2*/
#include <iostream>
#include <stdlib.h>
#include "FreeImage.h"
#include <vector>
#include <algorithm>
class pixel
{
public:
int x, y;
BYTE r, g, b;
float val; //color value; weighted 'brightness'
};
bool sortByColorVal(const pixel &lhs, const pixel &rhs) { return lhs.val > rhs.val; }
FIBITMAP* GenericLoader(const char* lpszPathName, int flag)
{
FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
// check the file signature and deduce its format
// (the second argument is currently not used by FreeImage)
fif = FreeImage_GetFileType(lpszPathName, 0);
if (fif == FIF_UNKNOWN)
{
// no signature ?
// try to guess the file format from the file extension
fif = FreeImage_GetFIFFromFilename(lpszPathName);
}
// check that the plugin has reading capabilities ...
if ((fif != FIF_UNKNOWN) && FreeImage_FIFSupportsReading(fif))
{
// ok, let's load the file
FIBITMAP *dib = FreeImage_Load(fif, lpszPathName, flag);
// unless a bad file format, we are done !
return dib;
}
return NULL;
}
bool GenericWriter(FIBITMAP* dib, const char* lpszPathName, int flag)
{
FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
BOOL bSuccess = FALSE;
if (dib)
{
// try to guess the file format from the file extension
fif = FreeImage_GetFIFFromFilename(lpszPathName);
if (fif != FIF_UNKNOWN)
{
// check that the plugin has sufficient writing and export capabilities ...
WORD bpp = FreeImage_GetBPP(dib);
if (FreeImage_FIFSupportsWriting(fif) && FreeImage_FIFSupportsExportBPP(fif, bpp))
{
// ok, we can save the file
bSuccess = FreeImage_Save(fif, dib, lpszPathName, flag);
// unless an abnormal bug, we are done !
}
}
}
return (bSuccess == TRUE) ? true : false;
}
void FreeImageErrorHandler(FREE_IMAGE_FORMAT fif, const char *message)
{
std::cout << std::endl << "*** ";
if (fif != FIF_UNKNOWN)
{
std::cout << "ERROR: " << FreeImage_GetFormatFromFIF(fif) << " Format" << std::endl;
}
std::cout << message;
std::cout << " ***" << std::endl;
}
FIBITMAP* Convert24BPP(FIBITMAP* dib)
{
if (FreeImage_GetBPP(dib) == 24) return dib;
FIBITMAP *dib2 = FreeImage_ConvertTo24Bits(dib);
FreeImage_Unload(dib);
return dib2;
}
// ----------------------------------------------------------
int main(int argc, char **argv)
{
// call this ONLY when linking with FreeImage as a static library
#ifdef FREEIMAGE_LIB
FreeImage_Initialise();
#endif
FIBITMAP *src = NULL, *pal = NULL;
int result = EXIT_FAILURE;
// initialize my own FreeImage error handler
FreeImage_SetOutputMessage(FreeImageErrorHandler);
// print version
std::cout << "FreeImage version : " << FreeImage_GetVersion() << std::endl;
if (argc != 4)
{
std::cout << "USAGE : Pic2Pic <source image> <palette image> <output file name>" << std::endl;
return EXIT_FAILURE;
}
// Load the src image
src = GenericLoader(argv[1], 0);
if (src)
{
// load the palette image
pal = GenericLoader(argv[2], 0);
if (pal)
{
//compare areas
// if(!samearea) return EXIT_FAILURE;
unsigned int width_src = FreeImage_GetWidth(src);
unsigned int height_src = FreeImage_GetHeight(src);
unsigned int width_pal = FreeImage_GetWidth(pal);
unsigned int height_pal = FreeImage_GetHeight(pal);
if (width_src * height_src != width_pal * height_pal)
{
std::cout << "ERROR: source and palette images do not have the same pixel area." << std::endl;
result = EXIT_FAILURE;
}
else
{
//go to work!
//first make sure everything is 24 bit:
src = Convert24BPP(src);
pal = Convert24BPP(pal);
//retrieve the image data
BYTE *bits_src = FreeImage_GetBits(src);
BYTE *bits_pal = FreeImage_GetBits(pal);
//make destination image
FIBITMAP *dst = FreeImage_ConvertTo24Bits(src);
BYTE *bits_dst = FreeImage_GetBits(dst);
//make a vector of all the src pixels that we can sort by color value
std::vector<pixel> src_pixels;
for (unsigned int y = 0; y < height_src; ++y)
{
for (unsigned int x = 0; x < width_src; ++x)
{
pixel p;
p.x = x;
p.y = y;
p.b = bits_src[y*width_src * 3 + x * 3];
p.g = bits_src[y*width_src * 3 + x * 3 + 1];
p.r = bits_src[y*width_src * 3 + x * 3 + 2];
//calculate color value using a weighted brightness for each channel
//p.val = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b; //from http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html
p.val = 0.5f * p.r + p.g + 0.1f * p.b;
src_pixels.push_back(p);
}
}
//sort by color value
std::sort(src_pixels.begin(), src_pixels.end(), sortByColorVal);
//make a vector of all palette pixels we can use
std::vector<pixel> pal_pixels;
for (unsigned int y = 0; y < height_pal; ++y)
{
for (unsigned int x = 0; x < width_pal; ++x)
{
pixel p;
p.b = bits_pal[y*width_pal * 3 + x * 3];
p.g = bits_pal[y*width_pal * 3 + x * 3 + 1];
p.r = bits_pal[y*width_pal * 3 + x * 3 + 2];
p.val = 0.5f * p.r + p.g + 0.1f * p.b;
pal_pixels.push_back(p);
}
}
//sort by color value
std::sort(pal_pixels.begin(), pal_pixels.end(), sortByColorVal);
//for each src pixel, match it with same index palette pixel and copy to destination
for (unsigned int i = 0; i < width_src * height_src; ++i)
{
bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3] = pal_pixels[i].b;
bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3 + 1] = pal_pixels[i].g;
bits_dst[src_pixels[i].y * width_src * 3 + src_pixels[i].x * 3 + 2] = pal_pixels[i].r;
}
// Save the destination image
bool bSuccess = GenericWriter(dst, argv[3], 0);
if (!bSuccess)
{
std::cout << "ERROR: unable to save " << argv[3] << std::endl;
std::cout << "This format does not support 24-bit images" << std::endl;
result = EXIT_FAILURE;
}
else result = EXIT_SUCCESS;
FreeImage_Unload(dst);
}
// Free pal
FreeImage_Unload(pal);
}
// Free src
FreeImage_Unload(src);
}
#ifdef FREEIMAGE_LIB
FreeImage_DeInitialise();
#endif
if (result == EXIT_SUCCESS) std::cout << "SUCCESS!" << std::endl;
else std::cout << "FAILURE!" << std::endl;
return result;
}
Results
American Gothic using Mona Lisa palette
American Gothic using Rainbow palette
Mona Lisa using Scream palette
Mona Lisa using Rainbow palette
Scream using Starry Night palette
\$\endgroup\$
\$\begingroup\$
C#
The points are ordered by a random walk starting from the center, and each point always takes the closest remaining color in the palette image, so the last pixels come out quite bad.
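The visiting order can be sketched like this (a rough Python version under the same idea: wander over the four cardinal neighbours from the centre and record each cell the first time it is stepped on; the C# code below keeps a HashSet of points for the same purpose):

```python
import random

def walk_order(width, height, seed=0):
    """Return every (x, y) cell of a width*height grid, ordered by when a
    random cardinal walk from the centre first visits it."""
    rng = random.Random(seed)
    x, y = width // 2, height // 2
    seen, order = set(), []
    while len(order) < width * height:
        if (x, y) not in seen:
            seen.add((x, y))
            order.append((x, y))
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        # only step if it stays inside the image bounds
        if 0 <= x + dx < width and 0 <= y + dy < height:
            x, y = x + dx, y + dy
    return order
```

Cells near the centre tend to be visited early, so they get first pick of the palette, while the edges are reached late and are left with whatever colors remain.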
Results
Gothic Palette


And the American couple, visitors from Wikipedia

Mona Palette



Code:
I don't know why, but the code is pretty slow...
public class PixelExchanger
{
public class ProgressInfo
{
public readonly Pixel NewPixel;
public readonly int Percentage;
public ProgressInfo(Pixel newPixel, int percentage)
{
this.NewPixel = newPixel;
this.Percentage = percentage;
}
}
public class Pixel
{
public readonly int X;
public readonly int Y;
public readonly Color Color;
public Pixel(int x, int y, Color color)
{
this.X = x;
this.Y = y;
this.Color = color;
}
}
private static Random r = new Random(0);
private readonly Bitmap Pallete;
private readonly Bitmap Image;
private readonly int Width;
private readonly int Height;
private readonly Action<ProgressInfo> ProgressCallback;
private System.Drawing.Image image1;
private System.Drawing.Image image2;
private int Area { get { return Width * Height; } }
public PixelExchanger(Bitmap pallete, Bitmap image, Action<ProgressInfo> progressCallback = null)
{
this.Pallete = pallete;
this.Image = image;
this.ProgressCallback = progressCallback;
Width = image.Width;
Height = image.Height;
if (Area != pallete.Width * pallete.Height)
throw new ArgumentException("Image and Pallete have different areas!");
}
public Bitmap DoWork()
{
var array = GetColorArray();
var map = GetColorMap(Image);
var newMap = Go(array, map);
var bm = new Bitmap(map.Length, map[0].Length);
for (int i = 0; i < Width; i++)
{
for (int j = 0; j < Height; j++)
{
bm.SetPixel(i, j, newMap[i][j]);
}
}
return bm;
}
public Color[][] Go(List<Color> array, Color[][] map)
{
var centralPoint = new Point(Width / 2, Height / 2);
var q = OrderRandomWalking(centralPoint).ToArray();
Color[][] newMap = new Color[map.Length][];
for (int i = 0; i < map.Length; i++)
{
newMap[i] = new Color[map[i].Length];
}
double pointsDone = 0;
foreach (var p in q)
{
newMap[p.X][p.Y] = Closest(array, map[p.X][p.Y]);
pointsDone++;
if (ProgressCallback != null)
{
var percent = 100 * (pointsDone / (double)Area);
var progressInfo = new ProgressInfo(new Pixel(p.X, p.Y, newMap[p.X][p.Y]), (int)percent);
ProgressCallback(progressInfo);
}
}
return newMap;
}
private int[][] GetCardinals()
{
int[] nn = new int[] { -1, +0 };
// int[] ne = new int[] { -1, +1 };
int[] ee = new int[] { +0, +1 };
// int[] se = new int[] { +1, +1 };
int[] ss = new int[] { +1, +0 };
// int[] sw = new int[] { +1, -1 };
int[] ww = new int[] { +0, -1 };
// int[] nw = new int[] { -1, -1 };
var dirs = new List<int[]>();
dirs.Add(nn);
// dirs.Add(ne);
dirs.Add(ee);
// dirs.Add(se);
dirs.Add(ss);
// dirs.Add(sw);
dirs.Add(ww);
// dirs.Add(nw);
return dirs.ToArray();
}
private Color Closest(List<Color> array, Color c)
{
int closestIndex = -1;
int bestD = int.MaxValue;
object gate = new object(); // guards bestD/closestIndex; the original updated them from parallel threads without synchronization
Parallel.For(0, array.Count, (i, state) =>
{
int d = Distance(array[i], c);
if (d <= 50)
{
lock (gate) { bestD = d; closestIndex = i; }
state.Break();
}
else
{
lock (gate)
{
if (bestD > d)
{
bestD = d;
closestIndex = i;
}
}
}
});
var closestColor = array[closestIndex];
array.RemoveAt(closestIndex);
return closestColor;
}
private int Distance(Color c1, Color c2)
{
var r = Math.Abs(c1.R - c2.R);
var g = Math.Abs(c1.G - c2.G);
var b = Math.Abs(c1.B - c2.B);
// Compare the two pixels' saturations; the original compared c1 with itself, which is always 0.
// Note: saturation is in [0, 1], so after the int truncation this term contributes at most 1.
var s = Math.Abs(c1.GetSaturation() - c2.GetSaturation());
return (int)s + r + g + b;
}
private HashSet<Point> OrderRandomWalking(Point p)
{
var points = new HashSet<Point>();
var dirs = GetCardinals();
var dir = new int[] { 0, 0 };
while (points.Count < Width * Height)
{
bool inWidthBound = p.X + dir[0] < Width && p.X + dir[0] >= 0;
bool inHeightBound = p.Y + dir[1] < Height && p.Y + dir[1] >= 0;
if (inWidthBound && inHeightBound)
{
p.X += dir[0];
p.Y += dir[1];
points.Add(p);
}
dir = dirs[r.Next(dirs.Length)]; // pick a random direction (the original used a Random() extension method that is not defined here)
}
return points;
}
private static Color[][] GetColorMap(Bitmap b1)
{
int height = b1.Height;
int width = b1.Width;
Color[][] colorMatrix = new Color[width][];
for (int i = 0; i < width; i++)
{
colorMatrix[i] = new Color[height];
for (int j = 0; j < height; j++)
{
colorMatrix[i][j] = b1.GetPixel(i, j);
}
}
return colorMatrix;
}
private List<Color> GetColorArray()
{
var map = GetColorMap(Pallete);
List<Color> colors = new List<Color>();
foreach (var line in map)
{
colors.AddRange(line);
}
return colors;
}
}
C#
Compares colors by their channel-wise distance and processes pixels in order of their distance from the image center.
EDIT: Updated, now should be about 1.5x faster.
American Gothic

The Scream

Starry Night

Marbles

River

Also, here's the yellow Chevy:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Threading.Tasks;
using System.Diagnostics;
namespace ConsoleApplication1
{
class Pixel
{
public int X = 0;
public int Y = 0;
public Color Color = new Color();
public Pixel(int x, int y, Color clr)
{
Color = clr;
X = x;
Y = y;
}
public Pixel()
{
}
}
class Vector2
{
public int X = 0;
public int Y = 0;
public Vector2(int x, int y)
{
X = x;
Y = y;
}
public Vector2()
{
}
public double Diagonal()
{
return Math.Sqrt((X * X) + (Y * Y));
}
}
class ColorCollection
{
Dictionary<Color, int> dict = new Dictionary<Color, int>();
public ColorCollection()
{
}
public void AddColor(Color color)
{
if (dict.ContainsKey(color))
{
dict[color]++;
return;
}
dict.Add(color, 1);
}
public void UseColor(Color color)
{
// The count check must stay inside the ContainsKey guard;
// the original threw KeyNotFoundException for colors not in the dictionary.
if (dict.ContainsKey(color))
{
dict[color]--;
if (dict[color] < 1)
dict.Remove(color);
}
}
public Color FindBestColor(Color color)
{
Color ret = dict.First().Key;
int p = this.CalculateDifference(ret, color);
foreach (KeyValuePair<Color, int> pair in dict)
{
int points = CalculateDifference(pair.Key, color);
if (points < p)
{
ret = pair.Key;
p = points;
}
}
this.UseColor(ret);
return ret;
}
int CalculateDifference(Color c1, Color c2)
{
int ret = 0;
ret = ret + Math.Abs(c1.R - c2.R);
ret = ret + Math.Abs(c1.G - c2.G);
ret = ret + Math.Abs(c1.B - c2.B);
return ret;
}
}
class Program
{
static void Main(string[] args)
{
string img1 = "";
string img2 = "";
if (args.Length != 2)
{
Console.Write("Where is the first picture located? ");
img1 = Console.ReadLine();
Console.Write("Where is the second picture located? ");
img2 = Console.ReadLine();
}
else
{
img1 = args[0];
img2 = args[1];
}
Bitmap bmp1 = new Bitmap(img1);
Bitmap bmp2 = new Bitmap(img2);
Console.WriteLine("Getting colors....");
ColorCollection colors = GetColors(bmp1);
Console.WriteLine("Getting pixels....");
List<Pixel> pixels = GetPixels(bmp2);
int centerX = bmp2.Width / 2;
int centerY = bmp2.Height / 2;
pixels.Sort((p1, p2) =>
{
Vector2 p1_v = new Vector2(Math.Abs(p1.X - centerX), Math.Abs(p1.Y - centerY));
Vector2 p2_v = new Vector2(Math.Abs(p2.X - centerX), Math.Abs(p2.Y - centerY));
double d1 = p1_v.Diagonal();
double d2 = p2_v.Diagonal();
if (d1 > d2)
return 1;
else if (d1 == d2)
return 0;
return -1;
});
Console.WriteLine("Calculating...");
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < pixels.Count; i++)
{
if (i % 100 == 0 && i != 0)
{
float percentage = ((float)i / (float)pixels.Count) * 100;
Console.WriteLine(percentage.ToString("0.00") + "% completed (" + i + "/" + pixels.Count + ")");
Console.SetCursorPosition(0, Console.CursorTop - 1);
}
Color set = colors.FindBestColor(pixels[i].Color);
pixels[i].Color = set;
}
sw.Stop();
Console.WriteLine("Saving...");
Bitmap result = WritePixelsToBitmap(pixels, bmp2.Width, bmp2.Height);
result.Save(img1 + ".png");
Console.WriteLine("Completed in " + sw.Elapsed.TotalSeconds + " seconds. Press a key to exit.");
Console.ReadKey();
}
static Bitmap WritePixelsToBitmap(List<Pixel> pixels, int width, int height)
{
Bitmap bmp = new Bitmap(width, height);
foreach (Pixel pixel in pixels)
{
bmp.SetPixel(pixel.X, pixel.Y, pixel.Color);
}
return bmp;
}
static ColorCollection GetColors(Bitmap bmp)
{
ColorCollection ret = new ColorCollection();
for (int x = 0; x < bmp.Width; x++)
{
for (int y = 0; y < bmp.Height; y++)
{
Color clr = bmp.GetPixel(x, y);
ret.AddColor(clr);
}
}
return ret;
}
static List<Pixel> GetPixels(Bitmap bmp)
{
List<Pixel> ret = new List<Pixel>();
for (int x = 0; x < bmp.Width; x++)
{
for (int y = 0; y < bmp.Height; y++)
{
Color clr = bmp.GetPixel(x, y);
ret.Add(new Pixel(x, y, clr));
}
}
return ret;
}
}
}
I decided to try an algorithm very similar to the one in my other answer, but swapping 2x2 blocks of pixels instead of individual pixels. Unfortunately, this algorithm adds the constraint that the image dimensions must be divisible by 2, which makes the raytraced spheres unusable unless I change their sizes.
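This answer's code isn't shown, but the block-matching idea it describes can be sketched in Java. This is a minimal illustrative version, not the author's implementation: pixels are packed 0xRRGGBB ints, each image is cut into 2x2 blocks, and every source block greedily takes the closest unused palette block by summed squared channel distance. All names and the toy data below are mine.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockMatcher {
    // Squared RGB distance between two packed 0xRRGGBB pixels.
    static int pixelDist(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return dr * dr + dg * dg + db * db;
    }

    // Distance between two 2x2 blocks (arrays of 4 packed pixels).
    static int blockDist(int[] a, int[] b) {
        int sum = 0;
        for (int i = 0; i < 4; i++) sum += pixelDist(a[i], b[i]);
        return sum;
    }

    // Cut a w*h pixel array (w, h even) into 2x2 blocks, row-major.
    static List<int[]> toBlocks(int[] pixels, int w, int h) {
        List<int[]> blocks = new ArrayList<>();
        for (int y = 0; y < h; y += 2)
            for (int x = 0; x < w; x += 2)
                blocks.add(new int[] {
                    pixels[y * w + x],       pixels[y * w + x + 1],
                    pixels[(y + 1) * w + x], pixels[(y + 1) * w + x + 1] });
        return blocks;
    }

    // Greedy matching: each source block takes the closest unused palette block.
    static List<int[]> match(List<int[]> source, List<int[]> palette) {
        List<int[]> pool = new ArrayList<>(palette);
        List<int[]> result = new ArrayList<>();
        for (int[] s : source) {
            int best = 0, bestD = Integer.MAX_VALUE;
            for (int i = 0; i < pool.size(); i++) {
                int d = blockDist(s, pool.get(i));
                if (d < bestD) { bestD = d; best = i; }
            }
            result.add(pool.remove(best));
        }
        return result;
    }

    public static void main(String[] args) {
        // Tiny 4x2 "images": a palette that is the source with its blocks swapped.
        int[] src = {0xFF0000, 0xFF0000, 0x00FF00, 0x00FF00,
                     0xFF0000, 0xFF0000, 0x00FF00, 0x00FF00};
        int[] pal = {0x00FF00, 0x00FF00, 0xFF0000, 0xFF0000,
                     0x00FF00, 0x00FF00, 0xFF0000, 0xFF0000};
        List<int[]> out = match(toBlocks(src, 4, 2), toBlocks(pal, 4, 2));
        // The all-red source block recovers the all-red palette block exactly.
        System.out.println(blockDist(out.get(0), toBlocks(src, 4, 2).get(0))); // prints 0
    }
}
```

The greedy scan is quadratic in the number of blocks, which is consistent with the slowness several answers in this thread report.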
I really like some of the results!
American Gothic with river palette:

Mona Lisa with American Gothic palette:

Mona Lisa with river palette:

I tried 4x4 also, and here are my favorites!
Starry Night with the Scream palette:

Mona Lisa with American Gothic palette:

The Scream with the Mona Lisa palette:

American Gothic with the Mona Lisa palette:

C#
This is really really really slow, but it does a great job, especially when using the raytraced spheres palette.

The Scream palette:

Mona Lisa palette:

American Gothic palette:

River palette:

The Starry Night palette:

class Program
{
class Pixel
{
public int x;
public int y;
public Color color;
public Pixel(int x, int y, Color color)
{
this.x = x;
this.y = y;
this.color = color;
}
}
static Pixel BaselineColor = new Pixel(0, 0, Color.FromArgb(0, 0, 0, 0));
static void Main(string[] args)
{
string sourceDirectory = "pic" + args[0] + ".png";
string paletteDirectory = "pic" + args[1] + ".png";
using (Bitmap source = Bitmap.FromFile(sourceDirectory) as Bitmap)
{
List<Pixel> sourcePixels = GetPixels(source).ToList();
LinkedList<Pixel> palettePixels;
using (Bitmap palette = Bitmap.FromFile(paletteDirectory) as Bitmap)
{
// Fixed: GetPixels returns a List, so the original "as LinkedList" cast produced null.
palettePixels = new LinkedList<Pixel>(GetPixels(palette));
}
if (palettePixels.Count != sourcePixels.Count)
{
throw new Exception("OH NO!!!!!!!!");
}
sourcePixels.Sort((x, y) => GetDiff(y, BaselineColor) - GetDiff(x, BaselineColor));
LinkedList<Pixel> newPixels = new LinkedList<Pixel>();
foreach (Pixel p in sourcePixels)
{
Pixel newPixel = GetClosestColor(palettePixels, p);
newPixels.AddLast(newPixel);
}
foreach (var p in newPixels)
{
source.SetPixel(p.x, p.y, p.color);
}
source.Save("Out" + args[0] + "to" + args[1] + ".png");
}
}
private static IEnumerable<Pixel> GetPixels(Bitmap source)
{
List<Pixel> newList = new List<Pixel>();
for (int x = 0; x < source.Width; x++)
{
for (int y = 0; y < source.Height; y++)
{
newList.Add(new Pixel(x, y, source.GetPixel(x, y)));
}
}
return newList;
}
private static Pixel GetClosestColor(LinkedList<Pixel> palettePixels, Pixel p)
{
Pixel minPixel = palettePixels.First();
int diff = GetDiff(minPixel, p);
foreach (var pix in palettePixels)
{
int current = GetDiff(pix, p);
if (current < diff)
{
diff = current;
minPixel = pix;
if (diff == 0)
{
return minPixel;
}
}
}
palettePixels.Remove(minPixel);
return new Pixel(p.x, p.y, minPixel.color);
}
private static int GetDiff(Pixel a, Pixel p)
{
return GetProx(a.color, p.color);
}
private static int GetProx(Color a, Color p)
{
int red = (a.R - p.R) * (a.R - p.R);
int green = (a.G - p.G) * (a.G - p.G);
int blue = (a.B - p.B) * (a.B - p.B);
return red + blue + green;
}
}
Java - Another Mapping Approach
Edit 1: After this challenge was shared in a maths community on G+, it seems we all use matching approaches, with various ways to work around the complexity.
Edit 2: I messed up the images in my Google Drive and restarted, so the old links no longer work. Sorry, I am still working on earning enough reputation for more links.
Edit 3: Reading the other posts gave me some inspiration. I have made the program faster and reinvested some CPU time in changes that depend on the location within the target image.
Edit 4: New program version. Faster! Special treatment of both areas with sharp corners and areas with very smooth changes (helps a lot with the ray tracing, but occasionally gives the Mona Lisa red eyes)! The ability to generate intermediate frames for animations!
I really loved the idea, and Quincunx's solution kind of intrigued me, so I thought I might as well add my 2 Euro cents.
The idea was that we obviously need a (somehow close) mapping between the two colour palettes.
With this idea I spent the first night trying to tweak a stable marriage algorithm to run fast, and within my PC's memory, on 123520 candidates. While I got into the memory range, I found the runtime problem unsolvable.
On the second night I decided to go further and dive into the Hungarian algorithm, which even promises an optimality property, i.e. minimum total distance between the colours of the two images. Fortunately I found three ready-to-use Java implementations (not counting the many half-finished student assignments, which make it really hard to google for elementary algorithms). But as one might have expected, the Hungarian algorithm is even worse in terms of running time and memory usage. Even worse, all three implementations I tested returned occasional wrong results. I shiver when I think of other programs that might be based on them.
The third approach (end of the second night) was easy, quick and after all not that bad: sort the colours of both images by luminosity and simply map by rank, i.e. darkest to darkest, second darkest to second darkest, and so on. This immediately produces a sharp-looking black-and-white reconstruction with some random colour sprayed around.
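The rank mapping can be sketched as a minimal standalone Java version. It assumes the same 3/6/1 luminosity weighting that the comparator in the full program uses; everything else (class and method names, packed 0xRRGGBB pixels) is illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;

public class RankMapper {
    // Integer luminosity proxy with the 3/6/1 channel weighting.
    static int lum(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return 3 * r + 6 * g + b;
    }

    // Map by rank: the k-th darkest target pixel receives the k-th darkest
    // palette colour. Returns the palette colour for each target index.
    static int[] mapByRank(int[] target, int[] palette) {
        // indices of target pixels, sorted darkest-first
        Integer[] order = new Integer[target.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingInt(i -> lum(target[i])));
        // palette colours, sorted darkest-first
        Integer[] palSorted = Arrays.stream(palette).boxed().toArray(Integer[]::new);
        Arrays.sort(palSorted, Comparator.comparingInt(RankMapper::lum));
        int[] out = new int[target.length];
        for (int k = 0; k < order.length; k++) out[order[k]] = palSorted[k];
        return out;
    }

    public static void main(String[] args) {
        int[] target = {0xFFFFFF, 0x000000, 0x808080}; // white, black, grey
        int[] palette = {0x202020, 0xE0E0E0, 0x909090};
        // black gets the darkest palette colour, white the brightest
        System.out.println(Arrays.toString(mapByRank(target, palette)));
    }
}
```

Because only the ordering matters, this runs in O(n log n) and trivially preserves the palette's exact multiset of colours, which is why it makes such a good starting point for the local corrections of approach 4.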
Approach 4, the final one so far (morning after the second night), starts with the above luminosity mapping and adds local corrections to it by applying the Hungarian algorithm to various overlapping sequences of pixels. This way I got a better mapping while working around both the complexity of the problem and the bugs in the implementations.
So here is some Java code; some parts might look similar to other Java code posted here. The Hungarian implementation used is a patched version of John Miller's, originally from the ontologySimilariy project. It was by far the fastest I found and showed the fewest bugs.
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Set;
import java.util.HashSet;
import java.util.Map;
import java.util.HashMap;
import java.util.List;
import javax.imageio.ImageIO;
/**
*
*/
public class PixelRearranger {
private final String mode;
public PixelRearranger(String mode)
{
this.mode = mode;
}
public final static class Pixel {
final BufferedImage img;
final int val;
final int r, g, b;
final int x, y;
public Pixel(BufferedImage img, int x, int y) {
this.x = x;
this.y = y;
this.img = img;
if ( img != null ) {
val = img.getRGB(x,y);
r = ((val & 0xFF0000) >> 16);
g = ((val & 0x00FF00) >> 8);
b = ((val & 0x0000FF));
} else {
val = r = g = b = 0;
}
}
@Override
public int hashCode() {
return x + img.getWidth() * y + img.hashCode();
}
@Override
public boolean equals(Object o) {
if ( !(o instanceof Pixel) ) return false;
Pixel p2 = (Pixel) o;
return p2.x == x && p2.y == y && p2.img == img;
}
public double cd() {
double x0 = 0.5 * (img.getWidth()-1);
double y0 = 0.5 * (img.getHeight()-1);
return Math.sqrt(Math.sqrt((x-x0)*(x-x0)/x0 + (y-y0)*(y-y0)/y0));
}
@Override
public String toString() { return "P["+r+","+g+","+b+";"+x+":"+y+";"+img.getWidth()+":"+img.getHeight()+"]"; }
}
public final static class Pair
implements Comparable<Pair>
{
public Pixel palette, from;
public double d;
public Pair(Pixel palette, Pixel from)
{
this.palette = palette;
this.from = from;
this.d = distance(palette, from);
}
@Override
public int compareTo(Pair e2)
{
return sgn(e2.d - d);
}
@Override
public String toString() { return "E["+palette+from+";"+d+"]"; }
}
public static int sgn(double d) { return d > 0.0 ? +1 : d < 0.0 ? -1 : 0; }
public final static int distance(Pixel p, Pixel q)
{
return 3*(p.r-q.r)*(p.r-q.r) + 6*(p.g-q.g)*(p.g-q.g) + (p.b-q.b)*(p.b-q.b);
}
public final static Comparator<Pixel> LUMOSITY_COMP = (p1,p2) -> 3*(p1.r-p2.r)+6*(p1.g-p2.g)+(p1.b-p2.b);
public final static class ArrangementResult
{
private List<Pair> pairs;
public ArrangementResult(List<Pair> pairs)
{
this.pairs = pairs;
}
/** Provide the output image */
public BufferedImage finalImage()
{
BufferedImage target = pairs.get(0).from.img;
BufferedImage res = new BufferedImage(target.getWidth(),
target.getHeight(), BufferedImage.TYPE_INT_RGB);
for(Pair p : pairs) {
Pixel left = p.from;
Pixel right = p.palette;
res.setRGB(left.x, left.y, right.val);
}
return res;
}
/** Provide an interpolated image. 0 <= alpha <= 1 */
public BufferedImage interpolateImage(double alpha)
{
BufferedImage target = pairs.get(0).from.img;
int wt = target.getWidth(), ht = target.getHeight();
BufferedImage palette = pairs.get(0).palette.img;
int wp = palette.getWidth(), hp = palette.getHeight();
int w = Math.max(wt, wp), h = Math.max(ht, hp);
BufferedImage res = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
int x0t = (w-wt)/2, y0t = (h-ht)/2;
int x0p = (w-wp)/2, y0p = (h-hp)/2;
double a0 = (3.0 - 2.0*alpha)*alpha*alpha;
double a1 = 1.0 - a0;
for(Pair p : pairs) {
Pixel left = p.from;
Pixel right = p.palette;
int x = (int) (a1 * (right.x + x0p) + a0 * (left.x + x0t));
int y = (int) (a1 * (right.y + y0p) + a0 * (left.y + y0t));
if ( x < 0 || x >= w ) System.out.println("x="+x+", w="+w+", alpha="+alpha);
if ( y < 0 || y >= h ) System.out.println("y="+y+", h="+h+", alpha="+alpha);
res.setRGB(x, y, right.val);
}
return res;
}
}
public ArrangementResult rearrange(BufferedImage target, BufferedImage palette)
{
List<Pixel> targetPixels = getColors(target);
int n = targetPixels.size();
System.out.println("total Pixels "+n);
Collections.sort(targetPixels, LUMOSITY_COMP);
final double[][] energy = energy(target);
List<Pixel> palettePixels = getColors(palette);
Collections.sort(palettePixels, LUMOSITY_COMP);
ArrayList<Pair> pairs = new ArrayList<>(n);
for(int i = 0; i < n; i++) {
Pixel pal = palettePixels.get(i);
Pixel to = targetPixels.get(i);
pairs.add(new Pair(pal, to));
}
correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.b - p1.d*p1.from.b));
correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.r - p1.d*p1.from.r));
// generates visible circular artifacts: correct(pairs, (p1,p2) -> sgn(p2.d*p2.from.cd() - p1.d*p1.from.cd()));
correct(pairs, (p1,p2) -> sgn(energy[p2.from.x][p2.from.y]*p2.d - energy[p1.from.x][p1.from.y]*p1.d));
correct(pairs, (p1,p2) -> sgn(p2.d/(1+energy[p2.from.x][p2.from.y]) - p1.d/(1+energy[p1.from.x][p1.from.y])));
// correct(pairs, null);
return new ArrangementResult(pairs);
}
/**
* derive an energy map, to detect areas of lots of change.
*/
public double[][] energy(BufferedImage img)
{
int n = img.getWidth();
int m = img.getHeight();
double[][] res = new double[n][m];
for(int x = 0; x < n; x++) {
for(int y = 0; y < m; y++) {
int rgb0 = img.getRGB(x,y);
int count = 0, sum = 0;
if ( x > 0 ) {
count++; sum += dist(rgb0, img.getRGB(x-1,y));
if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x-1,y-1)); }
if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x-1,y+1)); }
}
if ( x < n-1 ) {
count++; sum += dist(rgb0, img.getRGB(x+1,y));
if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x+1,y-1)); }
if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x+1,y+1)); }
}
if ( y > 0 ) { count++; sum += dist(rgb0, img.getRGB(x,y-1)); }
if ( y < m-1 ) { count++; sum += dist(rgb0, img.getRGB(x,y+1)); }
res[x][y] = Math.sqrt((double)sum/count);
}
}
return res;
}
public int dist(int rgb0, int rgb1) {
int r0 = ((rgb0 & 0xFF0000) >> 16);
int g0 = ((rgb0 & 0x00FF00) >> 8);
int b0 = ((rgb0 & 0x0000FF));
int r1 = ((rgb1 & 0xFF0000) >> 16);
int g1 = ((rgb1 & 0x00FF00) >> 8);
int b1 = ((rgb1 & 0x0000FF));
return 3*(r0-r1)*(r0-r1) + 6*(g0-g1)*(g0-g1) + (b0-b1)*(b0-b1);
}
private void correct(ArrayList<Pair> pairs, Comparator<Pair> comp)
{
Collections.sort(pairs, comp);
int n = pairs.size();
int limit = Math.min(n, 133); // n / 1000;
int limit2 = Math.max(1, n / 3 - limit);
int step = (2*limit + 2)/3;
for(int base = 0; base < limit2; base += step ) {
List<Pixel> list1 = new ArrayList<>();
List<Pixel> list2 = new ArrayList<>();
for(int i = base; i < base+limit; i++) {
list1.add(pairs.get(i).from);
list2.add(pairs.get(i).palette);
}
Map<Pixel, Pixel> connection = rematch(list1, list2);
int i = base;
for(Pixel p : connection.keySet()) {
pairs.set(i++, new Pair(p, connection.get(p)));
}
}
}
/**
* Glue code to do an hungarian algorithm distance optimization.
*/
public Map<Pixel,Pixel> rematch(List<Pixel> liste1, List<Pixel> liste2)
{
int n = liste1.size();
double[][] cost = new double[n][n];
Set<Pixel> s1 = new HashSet<>(n);
Set<Pixel> s2 = new HashSet<>(n);
for(int i = 0; i < n; i++) {
Pixel ii = liste1.get(i);
for(int j = 0; j < n; j++) {
Pixel ij = liste2.get(j);
cost[i][j] = -distance(ii,ij);
}
}
Map<Pixel,Pixel> res = new HashMap<>();
int[] resArray = Hungarian.hungarian(cost);
for(int i = 0; i < resArray.length; i++) {
Pixel ii = liste1.get(i);
Pixel ij = liste2.get(resArray[i]);
res.put(ij, ii);
}
return res;
}
public static List<Pixel> getColors(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
List<Pixel> colors = new ArrayList<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
colors.add(new Pixel(img, x, y));
}
}
return colors;
}
public static List<Integer> getSortedTrueColors(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
List<Integer> colors = new ArrayList<>(width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
colors.add(img.getRGB(x, y));
}
}
Collections.sort(colors);
return colors;
}
public static void main(String[] args) throws Exception {
int i = 0;
String mode = args[i++];
PixelRearranger pr = new PixelRearranger(mode);
String a1 = args[i++];
File in1 = new File(a1);
String a2 = args[i++];
File in2 = new File(a2);
File out = new File(args[i++]);
//
BufferedImage target = ImageIO.read(in1);
BufferedImage palette = ImageIO.read(in2);
long t0 = System.currentTimeMillis();
ArrangementResult result = pr.rearrange(target, palette);
BufferedImage resultImg = result.finalImage();
long t1 = System.currentTimeMillis();
System.out.println("took "+0.001*(t1-t0)+" s");
ImageIO.write(resultImg, "png", out);
// Check validity
List<Integer> paletteColors = getSortedTrueColors(palette);
List<Integer> resultColors = getSortedTrueColors(resultImg);
System.out.println("validate="+paletteColors.equals(resultColors));
// In Mode A we do some animation!
if ( "A".equals(mode) ) {
for(int j = 0; j <= 50; j++) {
BufferedImage stepImg = result.interpolateImage(0.02 * j);
File oa = new File(String.format("anim/%s-%s-%02d.png", a1, a2, j));
ImageIO.write(stepImg, "png", oa);
}
}
}
}
Current running time is 20 to 30 seconds per above image pair, but there are plenty of tweaks to make it either go faster or maybe get a bit more quality out of it.
It seems my newbie reputation does not suffice for this many links/images, so here is a textual shortcut to my Google Drive folder with image samples: https://drive.google.com/drive/folders/0B_nbjQohON5wQ0xQM0E4XzFMYWc
The samples I wanted to show first:
People -> Mona Lisa https://i.sstatic.net/zOh1Nh05.png
The program keeps track of all the point coordinates, but I feel exhausted now and do not plan to do animations for the moment. If I were to spend more time, I might implement the Hungarian algorithm myself or tweak the local optimization schedule of my program.