The full explanation is hard to communicate briefly, but there will always be issues mapping square voxels onto a sphere. This follows from the laws of mathematics and geometry and cannot be overcome by clever programming. You must always make a tradeoff: either sacrifice the continuous voxel map and accept sections that don't line up with each other (e.g. dodecaplanets), or sacrifice the sphere in order to have a seamless grid of voxels.
This is the kind of thinking that got us into this mess... instanced planets with transitions give us a pass on many of these issues because they change how the mapping can be viewed, especially from an immersion standpoint. Square mapping doesn't have to be exact; it doesn't actually matter. Use a Mercator mapping to project the flat square map onto a sphere for the visual display.
The generated world is completely square, but an inverse Mercator projection maps it onto the sphere for display. You can consistently figure out where you will land, what will hit where, and get convincing views of the planet from a distance while still understanding the topology without actually having to load into the instance. The Mercator sphere projection could even be updated periodically when the planet's surface changes sufficiently. Things will be warped, but you won't notice because of the instance transition. Use Pac-Man wrapping (walk off one edge, reappear on the opposite edge) for all angles of surface movement.
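As a sketch of what Pac-Man wrapping means for surface movement: the square map behaves like a torus, so coordinates just wrap modulo the map size. The map dimensions and function name here are illustrative, not anything from the game:

```python
# Hypothetical map dimensions in blocks; illustrative only.
MAP_W = 65536
MAP_H = 65536

def wrap(x, y):
    """'Pac-Man' wrapping: walking off one edge re-enters on the
    opposite edge, so the square map acts like a torus."""
    return x % MAP_W, y % MAP_H

print(wrap(65540, 100))   # past the right edge -> re-enters on the left
print(wrap(-3, -1))       # negative coordinates wrap the same way
```

Note that a true sphere doesn't wrap toroidally (crossing a pole should flip longitude), but since the proposal is to wrap all angles the same way, modulo arithmetic is all that's needed.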
The nice thing about this solution is that it could be implemented *today* and we know how to do it.
1: Use Perlin noise to generate the square height map, biomes, ores, and caves for a square map (with depth) representing the planet's surface. Theoretically the game already uses Perlin/simplex noise.
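For illustration, here is a minimal self-contained value-noise stand-in for the Perlin/simplex noise the game presumably already has (the hash constants and octave parameters are arbitrary choices, not anything from the game):

```python
import math

def _hash01(ix, iy, seed=0):
    # Cheap integer hash -> deterministic pseudo-random value in [0, 1).
    h = (ix * 374761393 + iy * 668265263 + seed * 962287) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFF) / 65536.0

def value_noise_2d(x, y, seed=0):
    """Bilinearly interpolated lattice noise -- a stand-in for Perlin noise."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # smoothstep fade so the gradient is continuous across cells
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = _hash01(x0, y0, seed) * (1 - sx) + _hash01(x0 + 1, y0, seed) * sx
    bot = _hash01(x0, y0 + 1, seed) * (1 - sx) + _hash01(x0 + 1, y0 + 1, seed) * sx
    return top * (1 - sy) + bot * sy

def height(x, y, octaves=4):
    """Fractal sum of noise octaves -> terrain height in [0, 1)."""
    total, amp, freq, norm = 0.0, 1.0, 1.0 / 64, 0.0
    for _ in range(octaves):
        total += amp * value_noise_2d(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2
    return total / norm
```

The same `height(x, y)` call can drive biome and cave thresholds by sampling with different seeds.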
2: Generate the UV maps for the sphere's surface texture yourself, or cheat and use Blender to do it for you beforehand. The texture changes, but the UV maps don't.
3: Sample the surface of the generated planet at the resolution of the texture you want to use as the UV texture, taking the topmost filled-in block for each texel's color. For example, if the generated terrain is 65536 x 65536 blocks but your texture is 1024 x 1024, sample every 64th x and y coordinate at the highest z level where a block exists, or average all the colors in each 64 x 64 region for more accurate results. This is then saved as the planet's texture.
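The averaging variant of that sampling step can be sketched as a plain downsample over the top-block color map. `top_colors` and `bake_planet_texture` are hypothetical names; the input is assumed to be a square 2D grid of (r, g, b) tuples, one per surface column:

```python
def bake_planet_texture(top_colors, tex_size):
    """Downsample the top-block color map into a tex_size x tex_size
    texture by averaging each source region."""
    src = len(top_colors)
    step = src // tex_size          # e.g. 65536 // 1024 = 64
    tex = []
    for ty in range(tex_size):
        row = []
        for tx in range(tex_size):
            # average every color in the step x step source region
            r = g = b = 0
            for y in range(ty * step, (ty + 1) * step):
                for x in range(tx * step, (tx + 1) * step):
                    cr, cg, cb = top_colors[y][x]
                    r += cr; g += cg; b += cb
            n = step * step
            row.append((r // n, g // n, b // n))
        tex.append(row)
    return tex
```

For the cheaper point-sampling variant, replace the inner averaging loop with a single read of `top_colors[ty * step][tx * step]`.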
4: To land on and leave the planet (and potentially also to manually sample the surface), use the equations from "Mercator Projection" on Wolfram MathWorld to map an incoming latitude and longitude to rectangular Mercator coordinates on the actual planet surface. These equations boil down to:
given central (y-axis) longitude = lng_0;
current longitude = lng;
and current latitude = lat;

forward (sphere to map):
x = lng - lng_0;
y = ln(tan(lat) + sec(lat));

inverse (map to sphere):
lat = 2 * arctan(e ^ y) - (1/2) * pi;
lng = x + lng_0;
So from these you can figure out where on the sphere you'll end up for a given outgoing (x, y), and where on the planet map you'll land for a given incoming (lat, lng).
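The equations above translate directly into code (angles in radians; function names are illustrative):

```python
import math

def to_map(lat, lng, lng_0=0.0):
    """Forward Mercator: spherical (lat, lng) -> planar (x, y)."""
    x = lng - lng_0
    y = math.log(math.tan(lat) + 1.0 / math.cos(lat))  # ln(tan(lat) + sec(lat))
    return x, y

def to_sphere(x, y, lng_0=0.0):
    """Inverse Mercator: planar (x, y) -> spherical (lat, lng)."""
    lat = 2.0 * math.atan(math.exp(y)) - math.pi / 2.0
    lng = x + lng_0
    return lat, lng

# Round trip: a ship descending at 45 deg N, 30 deg E maps to a unique
# point on the square world, and that point maps back to the same lat/lng.
lat, lng = math.radians(45), math.radians(30)
x, y = to_map(lat, lng)
lat2, lng2 = to_sphere(x, y)
```

One caveat worth noting: `y` blows up as `lat` approaches the poles, which is the usual Mercator polar singularity; a gameplay cap on usable latitude (or a polar exclusion zone) would sidestep it.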
5: To optionally have updated planet visuals from space, record changes per chunk of the generated planet surface and keep track of the top-level blocks in each chunk to figure out its new color; chunks can then update the texture when large enough changes accumulate.
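A minimal dirty-chunk tracker for that step might look like the following. The chunk size, threshold, and class name are all assumptions for illustration, not anything from the game:

```python
CHUNK = 64        # hypothetical blocks per chunk side
THRESHOLD = 100   # hypothetical edits in a chunk before its texel is re-baked

class SurfaceTracker:
    def __init__(self):
        self.edits = {}   # (cx, cy) -> block changes since the last re-bake

    def record_change(self, x, y):
        """Count an edit at block (x, y); True means the chunk is now dirty."""
        key = (x // CHUNK, y // CHUNK)
        self.edits[key] = self.edits.get(key, 0) + 1
        return self.edits[key] >= THRESHOLD

    def take_dirty(self):
        """Return chunks past the threshold and reset their counters;
        the caller re-samples their top blocks and updates the texture."""
        dirty = [k for k, n in self.edits.items() if n >= THRESHOLD]
        for k in dirty:
            del self.edits[k]
        return dirty
```

Batching by chunk like this keeps texture re-bakes proportional to where players actually build, rather than re-sampling the whole surface on a timer.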
The biggest issue I see with this is that very large structures will appear warped from space with this method. If planets are big enough this won't even matter, since you could never discern that much detail anyway. Continents and major land features will also be warped, but you won't be able to tell from the ground because you are up close to them. This applies somewhat to very large man-made structures: if you build a square, it won't look like a square from space, and the regularity of the structure will give the warping away.
The biggest advantages of this approach are its simplicity, the solved problem of corresponding space actions with ground actions and vice versa, and the updateability of the planet's surface visuals.