February 24th, 2021
started this document.
i've had a bunch of other ways of tracking progress for various programming projects, but those ended up being scattered across a half-dozen websites and not backed up, so that as sites vanished into the ether i ended up losing all the update posts. so this is something a bit less brittle, hopefully.
a brief summary of development so far: c. 2014 i worked on some 3d haskell game code, using what was effectively OpenGL 1 (fixed pipeline, rudimentary VBOs), but i never got a good game loop working and the entire project was kind of a mess. i eventually abandoned it with the opinion that maybe haskell rendering was bad and would never work.
in november of 2017 i started working on "two-week projects": code projects that i'd try to focus on for two weeks, before dropping and working on something else. this was mostly deep 'library code', rather than things like small games. over time i started building up projects i'd elaborate on and expand, including things like haskell rendering (using gpipe) and FRP event systems (using reactive-banana).
plenty of other projects weren't haskell at all, but ultimately a core of haskell code started to come together into something approaching an 'engine'. in the last few months i've been coding up some of the final core parts -- interactive UI, picking, loading, render management code, that kind of thing -- and now, while there are still large missing chunks (no audio, no model or animation support, minimal serializing code, etc) there's enough in place to start thinking about actual gameplay considerations: things like "what is the world going to look like", or "what kind of interactions should the player have with the world".
(many many years ago, when i was a kid and couldn't program at all, i read a bunch of stuff decrying 'idea guys', those people that appear on web forums and say, well i have a great idea for a game. i'll just need an artist to make all the art, and a programmer to code it all up, and a writer to actually write all the details, etc. these people contribute minimally to a project and want everybody else to do the work of enacting their vision, and at the time i was like "i want to never be like this, so i'll focus on actually coding stuff up and think about the fun game idea stuff later". ultimately i think that's kind of hamstrung my game designer abilities, since i've focused so much on low-level coding stuff rather than stuff like mechanics and interactions, so now i draw a blank whenever i have to think about gameplay specifics. i haven't posted much about any of the specific plot and setting concepts i have for this game, even though i do have a lot, in part because it felt incredibly self-aggrandizing to lay all the cool ideas out there when i have absolutely none of them implemented. but going forward, especially now that i basically have to work on the gameplay ideas aspects of this, some of this work is going to be simply laying all my ideas out there and trying to arrange them into concrete mechanics and interactions, which i still have a hard time considering to be "work", even though it's actually a critically important part of game design.)
that's the general overview. i'm specifically keeping this log so that i have something more specific to refer back to in the future -- "what was i working on in february?", because i immediately forget what i've done in the past month unless i write it down somewhere. i've updated previous iterations of this kind of logfile every few days, and for the past two years i've written blogposts summarizing my two-week projects every two weeks / every month (with some large gaps. it is COVID times after all.), so i'm not entirely sure how often i'll update this yet. but ideally more frequently than every two weeks.
February 26th, 2021
assembled the rest of the website. since i didn't really have any centralized development stuff, it's maybe worth making a 'this is where development is at' post before i start working on things again.
this is what the 'world map' screen looks like currently. in the past i've made some prettier-looking maps, but when i recoded everything i mostly discarded that code -- this is the thing i'm planning on working on next. currently, a "worldmap graph" is generated, and this is a simple debug render of nodes and edges. both the graph generation and how it's used to render a worldmap are things that need extensive improvement going forward, but for right now i'd accept being able to just make a continuous heightmap for the worldmap based on the present biome nodes.
(old worldmap renders looked like this or this, for reference.)
this is what the 'in-game' screen looks like currently. the worldgen here is completely disconnected from the world map generation -- it's just stretches of hardened sand with sloping sandy valleys between them, forever. currently a bunch of the lighting normals are busted, which is why the ground looks sharply rippled. right now, all you can do is walk around and the world will load in around you (which currently really destroys the framerate), but there's no way to interact with anything or do anything. there's also no 'distant landmass' rendering; all you see are the nearby loaded chunks.
there's some other stuff (simple UI, some shader stuff, etc) going on, but the bulk of the 'game' is here. right now, with some worldgen happening and the ability to pick tiles, there's no reason why i couldn't start adding in 'game mechanics' and try to make an actual playable game, but before i'm totally ready for that i'd like to fix up some of the engine issues. right now, i'm vaguely focused on better worldgen, both structure and rendering, and on things like overlapping height values in the main game -- one of the things i added in recently was support for overhangs, but it's very rudimentary, and i'd like to finish+extend that code to support things like buildings or boats or caves. once all that's in place, i'll have basically no excuse not to be working on game mechanics.
February 27th, 2021
what i started working on first of all is the 'ctm' support. this is named after the minecraft mod -- "ConnectedTexturesMod", originally designed to let people have seamless glass instead of blocky glass, and extended to support a whole bunch of other features. i play minecraft using a texture pack that uses a bunch of CTM stuff, and there are packs like dawnbreaker that use even more advanced CTM features to really expand their visual style.
i'm kind of fond of pointing out that vagrant story is equivalent in texture sizes to a 32x32 minecraft modpack -- like minecraft, vagrant story rooms are broken up into cubic chunks about half the size of a person in height, with optional half-blocks, and if you actually look at the textures up close, they're 32x32 per tile. vagrant story, however, looks gorgeous, while minecraft looks... distinctive, at the best of times. obviously a lot of that has to do with level geometry as well as textures -- vagrant story has slopes, but vagrant story also has unique, non-repeating textures for nearly all of its environments where minecraft has vast tracts of repetitive tiles, and an advanced CTM system can make up for a lot of the shortcomings of using a small set of tiles rather than uniquely painting every surface.
the CTM data model, however, is a nightmare. the most recent one has been simplified and merged into minecraft's .mcmeta format, with an example data file here, and it's not terrible. older versions, though...
the main issue is, if you want to have two different CTM methods active, you are in for a world of hurt -- and interleaving multiple methods is the only way to actually break up the visual character of all these repeating tiles.
the fundamental issue here is a combinatorial one: if you have six basic transforms -- ctm, random, pattern, horizontal, vertical, and overlay -- it's not hard to write those 6 directly. but combining any two of them together produces 30 unique combinations. the CTM mod just doesn't have the flexibility to define these transforms layering on top of each other programmatically, so it just makes ad-hoc layerings: the older version has overlay_* versions of its basic methods, which do the thing within the context of its overlay model. there's also vertical+horizontal and horizontal+vertical, which are a different-priority mixing of horizontal and vertical. outside of those hardcoded combinations, if you want to mix-and-match methods (which, to be clear, isn't particularly unusual -- imagine having a tile that you want to have a big 2x2 pattern across it, with random variations for each quarter) you have to go through this incredibly convoluted procedure: in the old version, you need to write the first method match, and have it rewrite the tile name to a dummy, debug tile label. then you need to write a separate match file on that dummy label and have its method produce your actual output. i don't know if it's even possible to do that in the new format.
this is where haskell comes into the picture. when i was coding up my own basic CTM support -- only handling random and pattern -- i wrote it up like this:
data Texturing
  = Random [(Float, String)]
  | Pattern Int Int [String]
so you have your texture paths and those are associated with either one or the other. but when i was thinking about if this type 'made sense', if it happened to correspond to anything that haskell had a typeclass about, i went through a series of steps. first, it's easy enough to make this an instance of Functor:
data Texturing a
  = Random [(Float, a)]
  | Pattern Int Int [a]
instance Functor Texturing where
  fmap f t = case t of
    Random was -> Random $ (fmap . fmap) f was
    Pattern w h as -> Pattern w h $ fmap f as
which, sure. now you can do things like fmap through it to turn a file path into UV coords, or anything else really, instead of being stuck with only file paths. except... in practice, if this is a bunch of file paths, you'd also need to load up the images, which would mean going through the entire texture value and loading all the file paths in turn. so that would be a Foldable instance, to just extract all the paths to load, like foldMap pure :: Texturing String -> [String], or a Traversable instance, which would let you transform the texturing type itself without losing track of what's mapped to what else, like traverse loadImage :: Texturing String -> Texturing Image. and it turns out this type can already do that:
instance Foldable Texturing where
  foldMap f t = case t of
    Random was -> (foldMap . foldMap) f was
    Pattern _ _ as -> foldMap f as
instance Traversable Texturing where
  traverse f t = case t of
    Random was -> Random <$> (traverse . traverse) f was
    Pattern w h as -> Pattern w h <$> traverse f as
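(as a quick sketch of what that buys you -- assuming something like JuicyPixels' readImage as the loader, and eliding error handling; the actual loading code is presumably different -- loading every referenced file is literally just one traverse:)

import Codec.Picture (DynamicImage, readImage)

-- load every file path in the structure, keeping the structure itself intact
loadTexturing :: Texturing FilePath -> IO (Texturing DynamicImage)
loadTexturing = traverse (fmap (either error id) . readImage)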
so that's neat, and useful. is there anything else? well, it's not really an applicative, because while you could kludge up a pure (something like pure a = Random [(1, a)]), there's not any way to write <*>. except... thinking about it, <*> corresponds exactly to that thing i mentioned above: applying one method after another. that would be really useful, actually!
this requires a slight tweak to the type:
data Texturing a
  = Fixed a
  | Random [(Float, Texturing a)]
  | Pattern Int Int [Texturing a]
note how there's now a 'base case' in Fixed, whereas Random and Pattern are now 'recursive', in that they store another Texturing value. so now it's possible to store Pattern values inside of Random values, and vice versa, and they would do different things: one would be a pattern that would have random variations within it, and the other would be a random selection between different patterns. this leads to some different instances:
instance Functor Texturing where
  fmap f t = case t of
    Fixed a -> Fixed $ f a
    Random was -> Random $ (fmap . fmap . fmap) f was
    Pattern w h as -> Pattern w h $ (fmap . fmap) f as

instance Foldable Texturing where
  foldMap f t = case t of
    Fixed a -> f a
    Random was -> (foldMap . foldMap . foldMap) f was
    Pattern _ _ as -> (foldMap . foldMap) f as

instance Traversable Texturing where
  traverse f t = case t of
    Fixed a -> Fixed <$> f a
    Random was -> Random <$> (traverse . traverse . traverse) f was
    Pattern w h as -> Pattern w h <$> (traverse . traverse) f as

instance Applicative Texturing where
  pure a = Fixed a
  tf <*> ta = case tf of
    Fixed f -> f <$> ta
    Random wfs -> Random $ second (<*> ta) <$> wfs -- second is Data.Bifunctor.second
    Pattern w h fs -> Pattern w h $ (<*> ta) <$> fs
and then you can mix-and-match those two methods in any way you want. which is currently just two ways. but the nice thing about this is that it's extremely extensible: when i actually add in proper connected textures, it'll just be a new constructor for the Texturing type, and it'll just require a single new line for each of those instances, which won't be any more complicated than the existing ones. i'm planning on adding proper connected textures, and overlay support, as well as some more esoteric stuff like wang tilings -- all of those can fit right into this type, and be combined in arbitrary ways to arbitrary depths without any problem.
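(to make that concrete, here's a small sketch -- with made-up texture names -- of the "2x2 pattern with random variation per quarter" case from earlier, written against these instances:)

quarteredTile :: Texturing String
quarteredTile =
  (\corner variant -> "sand_" ++ corner ++ "_" ++ variant)
    <$> Pattern 2 2 (Fixed <$> ["tl", "tr", "bl", "br"])
    <*> Random [(0.5, Fixed "a"), (0.5, Fixed "b")]

-- which evaluates to Pattern 2 2
--   [ Random [(0.5, Fixed "sand_tl_a"), (0.5, Fixed "sand_tl_b")]
--   , ...and likewise for "tr", "bl", "br" ]

each quarter of the pattern independently picks one of two variants, which is exactly the layering that took dummy-label contortions in the CTM format.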
so that's neat!
that being said, this is just a data type -- outside the realm of pure mathematics, there's a bunch of messy implementation details.
there's a little more code that goes into, for example, getting a tile's position and using that to determine what pattern texture is correct for that coord, or generating a hash from that coordinate value to determine which of the random textures to pick. and it's this part of the code that i dug into some today.
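(as a sketch of the coordinate-hash idea -- the point being that the same tile always picks the same variant, so nothing shimmers when a chunk re-renders; the mixing constants here are arbitrary, and the real pick walks the weighted list rather than ignoring the weights:)

import Data.Bits (shiftR, xor)

coordHash :: Int -> Int -> Int
coordHash x y = h `xor` (h `shiftR` 13)
  where h = x * 73856093 `xor` y * 19349663

pickVariant :: Int -> Int -> [a] -> a
pickVariant x y variants = variants !! (coordHash x y `mod` length variants)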
when i render tile surfaces, it's very easy to determine their coordinate: there's a big trihex grid and i've already worked out all the coordinate math to correctly place all the hexes and tris. but when there's a vertical wall between tiles... is it one or the other? neither is actually very useful, since a repeating pattern across a wall should obey different rules than a repeating texture across the floor or ceiling. also, if i want to add in proper connected textures at some point, part of the information i need to give to each tile for it to be textured correctly is the state of its adjacent tiles -- are they something it should tile with, or not?
there's a bunch of internal type and data management i need to do before all of that can work, and restructuring those types is most of what i did today. and part of doing that is handling what tiles even are: internally, tiles are stored as a raw numeric id, and previously i had a bunch of ad-hoc hardcoded functions that returned information on what a given id corresponded to. now there's a proper material index that everything uses, which is a big step towards having dynamic material types. and, since the CTM methods have different rules for what constitutes an 'adjacent tile' (is it the exact same material? is it the same material class? etc.) i need some of that code to be in place before i add in proper connected tiles.
so anyway in the next few days hopefully i'll have full connected textures to show off, which would be a nice change.
March 1st, 2021
more work on connected textures. first i had to restructure the render code a little to calculate the adjacent tile information, which led to some predictable issues. after that, i hooked together several systems: i changed the rendering code to calculate the adjacent tile borders; i added a new Connected type constructor for the texturing type; and i updated the texturing property parser to add in new syntax for connected textures.
this code still doesn't work right though, because there's another factor in play: there are theoretically 64 distinct edge-configurations for hexes (2^6, since there are two options across each of six sides). most of those are rotations or mirrors of each other; there are only 13 distinct textures. but my texturing code currently can't handle procedural rotations or mirrorings of textures. so right now i have a bunch of the code written, but it's really not usable yet.
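(for reference, the 2^6 is just each of the six edges either connecting or not; packing those flags into bits gives the 0-63 index a lookup table would be keyed on. a sketch:)

import Data.Bits (setBit)

-- flags listed in whatever the canonical edge order 0..5 ends up being
edgeIndex :: [Bool] -> Int
edgeIndex flags = foldl setBit 0 [i | (i, True) <- zip [0 .. 5] flags]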
there are two issues: one is that i don't have any clear, uniform mapping between edges. quick: you have a hexagonal tile at coordinate x,y with edges numbered 0 through 5. what's the triangular coordinate that's edge-adjacent with edge 2? which of its edges matches the hexagon's edge? what about the other five sides of the hexagon? that's rhetorical, because the actual answer is "it's underspecified"; you have to pick an ordering and numbering scheme. i've picked one out, but it's not really encoded in the way where i can easily get the answer to that question, yet, which means when i wrote the 'what are the adjacent tiles to this hex, and is that connection smooth or not' code i kind of haphazardly guessed some numbers that aren't correct. that's the first problem, that i'm not selecting the correct textures because the adjacency values aren't correct. the second problem is that even if i had the correct textures, i wouldn't be rendering them properly, because some of them should be rotated or mirrored.
still, this is a decent chunk of progress: calculating something incorrect at least means i have enough in place to be calculating something and returning a result, even if it's totally wrong.
so going forward, hopefully i'll fix those two issues and get proper connected textures working! and also draw some better textures, since i am definitely not an artist.
March 3rd, 2021
today: more slogging through the coordinate mines. the main issue here is just that things are very _fiddly_: i need to line up all these equations until they match. so i have the hex/trihex coordinate math, which internally gives each hex vertex an index and then constructs edges (with their own indices) between them. tri vertices, meanwhile, are generally referred to as a, b, and c, and there are two different kinds of tris, because there are 'up-pointing' and 'down-pointing' tris in the grid, and those have slightly different characteristics.
so in addition to that existing code, i've added in a bunch of adjacency code between hexes, which means
- using the base hex/tri code (which could be incorrect)
- to calculate the intermediate tri values (which i could be doing incorrectly),
- lining the edges up in a useful way (which i might not be doing),
- connecting their edges to hex edges (which i might not be doing correctly),
- and then using the result to check a connected-textures lookup table (which i might not have constructed correctly)
- to match a specific texture with a specific rotation (where i might not be applying the rotation correctly)
- to finally get a drawn texture (that might not be drawn at the right orientation)
for a lot of math problems, you can kind of perturb the equations and think a little and perturb them again and eventually end up in a situation with an equation that works. this is generally a little dangerous to do without considering your code in its entirety -- it's very easy to write one equation that flips something upside down accidentally, and then write a second equation in relation to the first that gets the 'correct' result by flipping things upside-down again, and so you end up with an overcomplicated set of code that does some unnecessary transformations and is difficult to fix in the future (since if you fix one equation, it breaks something, and unless you know for sure it's actually a "fix" and not a "new bug" it can be difficult to figure out exactly where the other transform is happening, since these generally aren't placed right next to each other or on the same kind of data).
the sheer number of layers involved in this code make it unrealistic to do anything other than work out, from geometric principles, exactly what transforms are necessary. that involves an awful lot of involved coordinate math, though.
anyway so that's what i did today. most of what i did was in the 'lining the edges up in a useful way' level, with a brief jaunt down to 'base hex/tri' code to fix an ordering bug in some of the adjacent-triangle code, as well as jumping up to 'match a specific texture with a specific rotation', because for a while i was applying the rotation clockwise instead of counterclockwise, and then a single edit to a triangle texture because i had drawn the edge lines along the wrong edge of the texture.
(again, it's worth noting that these equations aren't really 'objectively incorrect' when they're wrong -- if i had wanted to, i could have changed the texture mapping so that that texture was correct -- so much as 'not in line with other equations'. so by pinning down the basic coordinate math as 'definitely correct' that kind of pushes information flow outward to make things correct or incorrect in relation to them. truth is a messy concept even in the realm of pure mathematics.)
so anyway where i'm at today is that some of the values are being calculated correctly, but others are not. there's this remaining issue where... i think in that big list of things-that-could-go-wrong, the step that's currently broken is "calculating intermediate tri values"; those maybe aren't being calculated with the correct hexes in all cases.
also i really need to draw some better textures. maybe once i get all the code working.
March 4th, 2021
more progress with connected textures. they now work! so that's neat.
the remaining issues turned out to be: i was creating the tris incorrectly, so hexes were getting the wrong edges, and then for the one hex tile that involves getting mirrored as well as rotated, i was applying the wrong kind of rotation for how i was mirroring it. with those two issues fixed, all the connected texture code now appears to work.
i also drew a new surface tile for the cracked sand, since i never really felt like the old tile was very good. this one is a little better.
so yeah, that's neat! it's much easier to see the edges of hex sections now. also, this adjacency data is the first step towards having some more complex wang tilings as part of the texturing setup, which is another major thing i'd like to try doing.
(now that this is done and i have the borders -> texture lookup table filled out correctly, i should probably change it to an actual lookup table instead of what i have right now. there are a few more complex ways to handle the texture mapping, too -- there are 64 distinct hex tiles, and 8 distinct triangle tiles, and currently i have code to dynamically rotate and mirror them based on the 13 unique tiles. but there are sometimes reasons why you'd want to fill out some specific subset of those 64 -- like maybe a few specific tiles don't look right when rotated or mirrored, and so you want to put in overrides for those. that kind of thing isn't supported yet, but ideally would be in the future.)
the next step is... actually doing some of that worldgen rendering i mentioned at the start of this document. this whole connected textures thing was kind of a diversion, not that it didn't need to be done at some point. but ideally i'd be working on worldmap renders or something like that next.
March 30th, 2021
and then i took a break for a bit. specifically, i did all that ctm stuff and then immediately found a messy render-buffer index-counting bug (that as of writing this i still haven't fixed) and that really sapped my desire to work on this more.
i ended up throwing together a little unrelated idle game thing -- first this 'map generator' (loosely based on 'stop the darkness'), and then this (based more on 'theory of magic' and 'proto23') when i wanted an actual idle game, which might look very different when you click them now vs. how they looked when i posted this, depending on if i keep working on it or not. i mostly mention this here because depending on how much i dig into the idle game mechanics, some of that mechanics concepting might show up in this game. probably not.
anyway, after that i decided to get back to work on this, and part of that was focusing more strictly on actual gameplay elements. i've probably already mentioned it in this document, but over the years working on this project i've taken it from something incredibly bare-bones to something that's basically a limited 'engine', and at this point the only thing stopping me from starting to add in content is myself; there's plenty of hooks to add interactivity and game state to. but doing that requires a coherent idea of what the game is actually going to be. for a long time i've had this hazy 'everything' idea; you know the one: oh, just a game where you can do everything in. what are the technical limitations? don't think about it. but if i want to actually implement anything for real, it'll be a lot simpler if i sit down for a few weeks and make a fairly thorough outline of what the main game verbs should be and how they should be applied, so that i actually know what i'm going to be working on. this kind of thing never really feels like 'work' to me; it's more like "oh i sat around typing some stuff sometimes", but in practice i have noticed that it's much easier to code something up if you know exactly what you're doing beforehand, rather than figuring it out while you're coding.
so i guess that's what i'll be working on for the next while, or until i get tired of outlining things.
April 2nd, 2021
so i ended up making a list of a bunch of mechanics that need to be implemented, and how i plan on storing the data and handling the ui -- this isn't an exhaustive technical spec; it's more a statement of "no, i'm not doing that"; "yes, i am doing this and here's roughly how".
i ended up with eight categories for the most basic gameplay planned:
- plants
- time
- NPCs
- materials
- inventory
- items/tools
- storage
- constructions
some of these will take longer than others; 'time', for example, is mostly just reorganizing code that i already have and making a few new simple utility functions; the major thing conceptually is that i decided to use the sidereal day for internal time calculations, which removes a lot of the technical hassle of timekeeping at various different latitudes on a rotating planet (at the cost of making certain things like "is it day right now" less obvious from the raw time code). meanwhile, 'constructions' is a whole nest of new ui and new data structures and rendering systems and new collision code, etc. i'm gonna try working on each item for about two weeks and see where i am after all that, so hopefully in four months i have a decent chunk of the basics actually coded up. we'll see.
i'm starting with 'storage', because i had some immediate thoughts on how that would work -- i just worked on a bunch of picking code, and i left notes there about how to further alter how picking works, and storage is mostly things like "placing items in chests" or "placing items on racks and shelves" -- basically having objects in the game that you can interact with to take and place items -- which means that it's almost entirely a picking issue. since i don't have items yet, the interaction handlers won't do anything, but ideally when this is done i should have some fairly-freeform storage items.
today i coded up the absolute basics: storage objects will be a render polyhedra (that's actually drawn and uv-wrapped and displayed in the world) as well as a picking polyhedra (which is invisible and has action hotspots for its faces when you click them). they would also have some kind of internal state to keep track of their stored items, but, no items yet so that would be nothing. all this code doesn't run yet (i'd need to hook it into the chunk data as well as iron out the distinctions between an object and an instance of that object, plus do the restructuring of the picking code necessary to use picking polyhedra) but for once i actually have a fairly good idea of exactly what i need to code up, so that should make things much easier.
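(as a very rough sketch of the shape i mean -- these aren't the real project types, just placeholder names to illustrate the render/picking split:)

-- hypothetical stand-ins so the sketch is self-contained
data Polyhedra = Polyhedra
data Item = Item

data StorageObject = StorageObject
  { renderShape  :: Polyhedra   -- drawn, uv-wrapped, displayed in the world
  , pickingShape :: Polyhedra   -- invisible; its faces are the action hotspots
  , contents     :: [Item]      -- internal stored-item state (always empty, for now)
  }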
April 3rd, 2021
just a little more working on storage code. i hooked the storage type up to a general 'object' type, and i put that into the chunk data, and i hooked that up to a rendering function (that currently does nothing).
a big part of coding is just hooking things up to other things, so that the right stuff is surfaced in the right places (this is what 'software engineering' is, as distinct from 'programming'). right here it's "object data is traversed over when a chunk render is generated", and the actual doing-things bit is kind of incidental: the object rendering function does nothing; it immediately returns an empty set of vertices. but now all the stuff is in place so that when i do write the rendering function, it'll be called properly.
April 4th, 2021
more structuring the storage code. i actually wrote down the specifics of what exactly needs to change before all this is done (well, before all this is testable):
- replace Data.HexPack.hitTile with a different hit function that returns depth (and ideally solves the edge-overhang issue + returns more regular polyhedra data)
- update Main.worldmapGeometryPickRay to call the new tile hit function + pickObjects and select the closest target
- write Data.Object.renderObject (use the instance's material value along with the template's polyhedra to generate a fully-renderable polyhedra model)
- write Data.Object.pickObjects (actually do some picking for the various objects' picking models)
  - this will require a general-purpose "intersect a polyhedra with a ray and return the hit faces" function (a rough sketch of the plane/ray core of that is just after this list)
- change GameInfo.pickOverlayUpdate to do something useful to store non-hex pick hits. this might involve making some better GameObjectLabel values that actually, you know, uniquely label game objects
- expand the game handler in Main.loadNewGame (it's gameHandler) to respond to PlayerActionTrigger in a specific way when a non-hex GameObjectLabel is picked. right now just correctly recognizing when clicks happen over an object's picking faces is enough
  - later on i guess PlayerActionTrigger should be changed out for left/right hand actions, or however else i want to do using objects / using held items, and that would change both gameEvents and gameHandler & make clicking an object do something
- actually give reasonable models + picking models to racks and shelves (& verify that chests work)
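(since the polyhedra-ray intersection is the most 'mathy' item in that list, here's roughly the plane/ray core it reduces to -- a sketch, not the actual Geometry.Plane code, with the plane as all points p where dot n p == d and the ray as o + t*v:)

import Linear (V3, dot, (*^), (^+^))

rayPlane :: V3 Float -> V3 Float -> V3 Float -> Float -> Maybe (V3 Float)
rayPlane o v n d
  | abs denom < 1e-6 = Nothing   -- ray is parallel to the plane
  | t < 0 = Nothing              -- the plane is behind the ray origin
  | otherwise = Just (o ^+^ t *^ v)
  where
    denom = dot n v
    t = (d - dot n o) / denom

each face's plane gets a test like that, then a "did the hit land inside the face polygon" check (that's the pointInPoly2d part), and the closest accepted hit wins.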
that's a little more specific than i usually get with code stuff. i've been generally trying to keep things at a layperson's level of understanding, so this, which is a whole mess of technical stuff, is maybe a little alienating; who knows. the exciting thing about this for me is that... this is it? that's a list of specific functions (and sometimes specific types) that need to be written or updated, and then once that's all done storage stuff should work. usually things are a lot more nebulous, since i don't pin things down specifically. it's a nice feeling, but also who knows; some of those functions could turn out to be really complicated to write. maybe not. i guess we'll see over the next week and a half.
April 5th, 2021
well, since i made a list yesterday, i guess today i'm updating it:
- replace Data.HexPack.hitTile with a different hit function that returns depth (and ideally solves the edge-overhang issue + returns more regular polyhedra data)
  - actually test this function w/ a worldgen that has overhangs (and in general)
- update Main.worldmapGeometryPickRay to call the new tile hit function + pickObjects and select the closest target
- write Data.Object.renderObject (use the instance's material value along with the template's polyhedra to generate a fully-renderable polyhedra model)
- write Data.Object.pickObjects (actually do some picking for the various objects' picking models. this needs a little restructuring to surface objects' actual positions)
  - this will require a general-purpose "intersect a polyhedra with a ray and return the hit faces" function
  - test Geometry.Lines.pointInPoly2d (see how robust it is to rounding error near the ends / if you're actually constructing the line loop correctly)
  - test Geometry.Plane.planeFromPoints (is that * -1 needed after all?)
  - test Geometry.Plane.onRay (see if you can turn the equation soup into some transforms you recognize)
  - test Data.Polyhedra.intersectWithRay
- change GameInfo.pickOverlayUpdate to do something useful to store non-hex pick hits. this might involve making some better GameObjectLabel values that actually, you know, uniquely label game objects
- expand the game handler in Main.loadNewGame (it's gameHandler) to respond to PlayerActionTrigger in a specific way when a non-hex GameObjectLabel is picked. right now just correctly recognizing when clicks happen over an object's picking faces is enough
  - later on i guess PlayerActionTrigger should be changed out for left/right hand actions, or however else i want to do using objects / using held items, and that would change both gameEvents and gameHandler & make clicking an object do something
- actually give reasonable models + picking models to racks and shelves (& verify that chests work)
so, you know, mostly just wrote some geometry functions today. haven't tested much of anything yet.
April 11th, 2021
and then i got distracted by working on other projects, which is, as always, the main thing keeping me from making progress with any of them. but then today i got pinged on the haskell gamedev discord that somebody was working on a fork of GPipe, the rendering library i use for all of this, and i spent a bit of time being a tester for the new library. it would be really nice to have a more active developer -- the dev of the original library is now more of a maintainer, who's mostly been bumping version numbers as other packages update. the fork of gpipe, which i'm now using, has some alpha support for geometry shaders, which will be interesting to pick at later on. this also means i'll have to re-fix gpipe's texture reading code (and i guess put in a pull request for the actual repo), since that was something that was broken that i had to fix in gpipe itself. so that was all pretty exciting; not every day you migrate your two major rendering libraries to different versions.
April 12th, 2021
regrettably still not working on more storage stuff. whoops. but what i did do was work on better background loading. for a very long time, when you zoned into the map from the worldmap, the game would load the surrounding chunks and then never load any more chunks, so you could walk to the edge of the world without any issue. after a while i decided to fix that, and so i put together this big coroutine-based timed loading system (coroutine-based because all render actions need to happen in the same thread and that thread is the main thread) -- every frame there would be a time 'budget' for loading, and once that budget was up it would suspend loading and start again next frame.
(the main issue with loading is it requires allocating render buffers -- a very slow operation -- and doing a lot of buffer writes -- a much faster but still potentially slow operation. doing them all together would take up way more than a frame, so i'd need some way to pause the load so i could render a frame and handle input and so forth, and then resume again in the same place next frame. the ultimate goal for background loading was for it to be background; for it to be staggered enough to not meaningfully impact the framerate. this involved a bunch of optimizations, like saving pre-allocated buffers instead of freeing and reallocating them, and so forth.)
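(stripped of all the actual buffer work, the shape of the thing is something like this -- a toy sketch of the per-frame budget idea, not the real coroutine code:)

import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- loading as a chain of small steps; run steps until the per-frame budget
-- (in seconds) is used up, then hand the remainder back for next frame
data Load a = Done a | Step (IO (Load a))

runWithBudget :: Double -> Load a -> IO (Load a)
runWithBudget budget load = do
  start <- getCurrentTime
  let go l = case l of
        Done _ -> pure l
        Step next -> do
          now <- getCurrentTime
          if realToFrac (diffUTCTime now start) >= budget
            then pure l   -- out of time; resume from here next frame
            else next >>= go
  go load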
so previously i'd gotten 'background' loading working, but it was still incredibly slow, since i was slicing up the allocation/generation/render passes of landscape generation, but i wasn't slicing up inside the render pass -- the entire landscape buffer would have all of its vertices written in one big chunk, which took forever and absolutely destroyed the framerate. so what i did today was further restructure the loading code so i could slice up render buffer writes into a bunch of passes (think each one writing a single 'row' of tiles) that it could suspend between, and that meant it could actually do a bunch of loading per-frame without meaningfully impacting the framerate.
so that felt really nice! one of the biggest issues with the game remaining has just been the inability to really... move around. there's definitely still issues (i'm still not generating any 'distant land' geometry, so it's still very minecraft-like in that the view distance is like, 500 feet and you can distinctly see the chunks pop in) but this is a big improvement.
that being said, uh, there's still a pretty big bug where after a certain point the newly-loaded chunks aren't actually rendered in the right place and they end up overlapping and overwriting the existing world geometry. that's a big problem whose source i'm not sure of yet. but i'll figure it out eventually.
April 29th, 2021
and then i got caught up in other stuff and didn't work on this for the rest of the month. or, specifically, i got into this big mood of wanting to finish off projects instead of starting new ones, and did a bunch of writing, but then i stepped back a bit and considered how if i started only going for the low-hanging fruit of half-finished projects then i'd just be finishing half-written stories for a few years before getting into any code projects.
but also i was a little blocked on that issue i mentioned at the end of the last post -- newly-loaded chunks being corrupted in some way so they have the wrong geometry rendered. i think the root cause of that is some bad data management in how i've written my background-loader code, and that's something that will take some actual thought to fix, since it's one of the more complicated parts of the code. (and it's still only a theory that that's what's causing the problem; it could be that i totally restructure my loading code and this bug still happens.)
anyway today i got back to work on that a little and started sketching out some of the things i'd need to change in order to get the rendering code fixed. hopefully i can get that fixed before too much more time passes. if nothing else, this is a good example of how a simple plan of "okay i'll work on each thing for two weeks" immediately goes off the rails. i just really want to be able to walk around the map.
April 30th, 2021
success! i restructured how background loading worked and that seems to have solved the issue. now i can walk for as far as i want and not run into corrupted geometry. though, the geometry generation is pretty slow and a bit unordered right now, which means it's not very difficult to walk to the edge of the loaded world and watch chunks pop into existence. that's a performance issue, though, not a bug, so i guess i can wait on trying to improve that for a while.
i'm also not sure if the loading is totally correct & robust. i'm gonna have to do some testing to check exactly how chunks are loaded and unloaded and make sure i don't have any buffer leaks. this includes things like making sure all the buffers are properly freed when the world is left, which i don't remember how i'm doing at all. so there's still some bookkeeping to figure out.
(currently the game has a very minecraft-style rendering structure, by which i mean that there's a bunch of 'loaded chunks' that are rendered at full detail, and then a hard edge of totally invisible 'unloaded chunks'. one of the big things i'd like to add at a later point is some amount of LODs to push out the horizon super far, so that you can actually see, you know, distant hills or mountains. i'll have to improve load times for that, since that means generating even more world geometry, but... i can figure that out when i get to it.)
in the mean time, well, this was kind of my final excuse of things-i-really-want-to-get-done before i focus on content. so i guess may 1st-15th is gonna be returning to working on 'storage' + improved picking.
(getting world loading working means i can start thinking about worldgen for real, which is... interesting, but not really a huge priority, since i'm extremely aware of how worldgen can be an infinite timesink. higher up on the priority list is getting movement feeling better -- when i added in basic collision, i had the player just stop instantly when they hit a wall that was over a certain height. this leads to really jerky movement. something better would be to 1. convert some movement to move along the wall, so players don't 'stick' to walls, and 2. have a slightly higher level that you can 'climb over' by having the collision response be moving the player slightly upward. this would basically introduce a height level where you can instantly step up vs. a height level that you have to take a slower step up. later on i plan to have mantling, so you could grab a ledge above you and pull yourself up that way, which would add further vertical movement. all of that isn't really a huge priority, but they're definitely things i'll have to add at some point, because currently movement is really unpleasant.)
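(for reference, the 'slide along the wall' part is just removing the into-wall component of the movement vector -- a sketch with linear's V3, assuming n is the unit wall normal pointing out of the wall:)

import Linear (V3, dot, (*^), (^-^))

-- only cancels movement that points into the wall; motion away from or
-- along the wall passes through unchanged
slideAlongWall :: V3 Float -> V3 Float -> V3 Float
slideAlongWall move n = move ^-^ (min 0 (dot move n) *^ n)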
May 1st, 2021
well, maybe i spoke too soon. when i had tested the new loading code it was actually with a slightly different function that didn't do the full loading process, and when i changed it to actually do the full loading process the buffer corruption issue showed up again. that led me to really dig down into what could possibly be causing the issue, since i guess that entire background-loading coroutine change was unnecessary (although still a good idea, since it helped make the code a little less conceptually difficult to think about).
after doing a little more debugging, i turned up the immediate cause of the problem -- there were duplicate chunk entries in the map render cache, which meant when they were unrendered and their buffers dumped into the spare buffer list they ended up in there multiple times and thus were grabbed by several chunks -- but i still haven't pinned down the root cause of "what's actually putting these chunks in the render cache multiple times". still, that takes the problem out of the realm of a total mystery and into the realm of a known problem with several different paths to solution, so i'm calling that a win.
May 5th, 2021
after a few more days of not doing much, i restructured the buffer cache code to correctly de-duplicate and partition things.
one other issue i hadn't mentioned in the previous updates was that sometimes during the process of trying to fix things i'd occasionally get a crash with 'no such label as Tri 0 0' during chunk loading. ultimately the issue there was that when i allocated buffers originally, i was tying them to different 'labeling functions' -- the landscape buffers had labels for both hexes and tris, whereas the decoration buffers (just plants, currently) only had labels for hexes. they both had the same type, though, and the spare buffers were kept in what was essentially a heterogeneous list like [landscapebuffer, plantbuffer, landscapebuffer, plantbuffer], etc. so if i ever tried to sort or otherwise restructure that list without making sure to always extract 'tuples', maybe a plant buffer got reused as a landscape buffer and caused that crash. that was a really dumb way to do things so i changed the spare buffer cache to track the two types of buffer separately.
anyway, problem solved! or at least, immediate cause worked around; i haven't actually checked to see why chunks end up in the render cache multiple times. that's probably worth checking out at some point.
(this does actually introduce a kind of funny visual issue: i'm reusing plant buffers, and i'm not clearing them out before writing to them again, and i'm drawing the whole buffer all the time, and all that together means that when a plant buffer is reused, plants on now-empty hexes still remain, and will be floating in the air or buried underground in the new chunk. this is one of those issues that's obviously an issue for a player but not really an issue for me as a developer, since it's not really causing any problems other than there being floating plants in some circumstances. i should probably fix it at some point, though.)
September 18th, 2021
back at it again after a prolonged break, for reasons that i might recap in a later post.
anyway today i started restructuring the main loop, and separated all the actual 'loop logic' (main function and loop + program termination code) out from the actual game logic (render and game data processing code). it's still a work in progress, since i want to rewrite the event system, which might require totally rewriting every event handler everywhere, but it's a start.
September 28th, 2021
okay so i wanted to get back into working on this, and one of the things i wanted to do was remove all the reactive-banana code -- so, the entire FRP system. on further inspection i feel like FRP really isn't a good fit for handling interactive elements, and trying to shoehorn all the code into that shape was ultimately just adding a giant hassle of confusing types. i had actually put together another haskell project with a totally bespoke event handling system, which worked okay, so i started by trying to replicate some of that over in this project. the problem with that is that i never got around to reimplementing all my forms code in the other project, so i was immediately swamped under trying to recode that, & restructuring the other event handler code to even be capable of representing my forms code, & redoing all the super fiddly buffer writing and updating code that i had already done. all-in-all it was a really frustrating experience and i think i might just stop, or at least stop until i've reimplemented forms & the like in a more robust fashion elsewhere so i can more easily port it over to this project.
that does kind of leave me without a specific part of this project to actually work on, though.
at least i did do some useful work: i restructured the basic 'main loop' of the game, and extracted all the 'game stuff' out from the 'loop stuff', where previously they were incredibly tangled.
September 29th, 2021
well, i redid map generation somewhat. this is the 'starter graph', which will be expanded further on, but i might use just this as a testing ground for rendering a nice-looking worldmap, instead of this nodes-and-edges mess i have currently.
this is theoretically a rhombic face of a rhombic-dodecahedral world, so, 1/12th of the total surface area. the idea is one sharp end of the rhombus touches polar north, and the other sharp end of the rhombus touches the equator, with the resulting shape encompassing a 90-degree slice of longitude.
also there's a problem with graph expansion here, where in practice i'd want to treat edges as 'traversable terrain' -- there would be some kind of implicit boundary like a mountain range or an inhospitable desert or w/e between nearby-but-unconnected nodes -- which means that when i place down a new node, i'd want to look at all the other nodes that are within the 'graph face' it's been placed in, and optionally connect them. but that's not really an operation that my graph grammar can do -- it can only match on fixed, known-size subgraphs, not "any nodes in the same face". tbh this is making me wonder if i should start using my polyhedra data type as a graph type, since they're isomorphic data structures anyway.
October 11th, 2021
getting back to work on this again. the goal is to make a somewhat-pretty-looking worldmap, before i start to dig into the simulation aspects of settlements. today i started by trying to untangle some of the coordinate math for world generation -- get the generated world rhombus aligned with an actual chunk rhombus (which makes it easier to draw), align things so the 'north' nodes are actually facing north, that kind of thing. which, due to various coordinate weirdness, i ended up doing by literally running the game and seeing what direction the north star was in at night + which direction the sun rose and set in, to determine the 'canonical' directions. one neat thing to test out is that deep in the shader math there's a certain hardcoded lat/long parameter, and now with the world rhombus specified there's enough information to calculate the lat/long for any given hex in the world, so now i could make that vary and have things like time zones or southern stars naturally arise. i'm gonna need to overhaul some parts of the skybox shader before it's really robust enough to handle all that, though.
anyway mostly today i just got the world rhombus rendering. the next step would be to have each hex sample the nearby nodes, and blend them together to get a more varied 'biome'. to start with i'll just be happy with elevation and roughness renders, but that's something for tomorrow.
October 12th, 2021
today i mostly went looking (again) for haskell delaunay/voronoi libraries. still no good, working ones. but also thinking about it i probably don't want precisely a delaunay triangulation for the world generator, so i can probably do something else that would have a similar effect.
October 13th, 2021
i started working on worldmap renders. the basic idea is this: a big 'worldmap graph' is generated, as above, and then each hex in the world rhombus samples all nodes within a certain distance, weighs them based on closeness, and blends the 'biome generation' for each node together to get some resulting hex information. then each hex is rendered to display the worldmap. at first i'm just going for height values displayed.
also, initially i was doing the weighting wrong in several ways (things were literally weighted by distance, as in, the further away a node was the higher its weight).
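(what i want is the standard inverse-distance thing, roughly -- closer nodes count for more. a sketch of the height blend, not the exact code:)

-- takes (distance to node, node height) pairs for every node in sampling range
blendHeights :: [(Float, Float)] -> Float
blendHeights nodes = sum (zipWith (*) weights heights) / sum weights
  where
    weights = [1 / (d + 1e-6) | (d, _) <- nodes]
    heights = [h | (_, h) <- nodes]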
currently the old graph render is drawn at height 0, while all the terrain is generated around heights 32-100. it is pretty funny to me how the worldmap just turns into a mess of text labels.
i drew some really bad textures just to add some visual variance. currently the generation parameters used are just height (which is averaged between biome influences) and material (which is copied from the most impactful biome). ideally i'd like biome to be a lot more varied than that -- the 'canyon cities' region, for example, is supposed to be a bunch of arroyos and canyons that eventually empty out into vast dry saltbeds, with a bunch of colorful striated rock and pastel blue/pink saltfields. right now it's just a sand region with a slightly lower height.
the big step i'd have to take there would be to change each region from having a fixed data point (this high, this material, etc) to having an actual generator function that generates the full biome, with all its internal variance. those could then be blended together. this might also involve doing some actual graph subdivision -- that's where more graph stuff would come in handy, since i could convert the singular 'canyon cities' node into a whole subgraph that has a coherent, connected (dry) waterflow system, so that all the riverbeds actually do empty out into a bunch of endorheic basins. the main issue with that is just that since i have no triangulation code i have no way to cut out an area for each subgraph, so i can't really place down starter templates for any of the regions.
also the worldgen was happening in a tiny region, so i added a scaling multiplier to stretch things out. this is set to 3x, which might be a good start. still a very very tiny region, considering this is supposed to be 1/12th of a planet's surface, but it should be big enough to generate some regional features.
i'm actually very hyped to maybe hook in that lat/long calculation to the worldmap, too -- if the celestial sphere visibly rotates as you move from the northern tip (north pole) to the southern tip (equator) that would be really neat.
(working on the sky stuff for this game has definitely made me see flat earth stuff in a new light -- like, yes it's absurd for a lot of reasons, but if you forget all that other stuff, still: how do you get a celestial sphere that works the way ours does without actually being on a sphere? you'd have to, what, fake every single photograph or exposure of the night sky in the entire world? given that most games are actually on a flat plane i've had to do a lot of special math to try to mimic a sphere, just because starfields literally would not work correctly on a non-spherical planet, since it turns out the axis of rotation and the northern/southern stars have a very particular shape that it's difficult to mimic without fully committing to a spherical shape.)
ideally i'd like to render the worldmap as an inflated rhombic dodecahedron, even if all the other faces are just flat wireframes or something, just so i could fully implement the spherical coordinate transforms for the skybox. it's the 'inflating' that's the problem though, since at the edge of the rhombus you'd have discontinuities in the map grid, even disregarding the curvature problems.
October 18th, 2021
and then after a few days of not really making progress with generation, i tried something else.
the main issue with generation is that i have no easy way to partition off subregions. normally i'd do something like voronoi cells, but there isn't really a good library for that in haskell. there are a few other ways to partition space, but none that are really that effortless, so i tried a few things out and was stymied and kinda gave up for a bit.
one of the ways to partition space would require a graph that goes off the edges of the rhombus, so that it's possible to generate a dual graph and then use that to place subregions. but that would require a more robust graph framework, to handle something that gets closer to the topology of a rhombic dodecahedron. i wanted to try out rendering something like that, so i started thinking about curvature shaders.
this is theoretically very simple, since it's basically exactly what a vertex shader is for: using some information, transform all the scene's vertices somehow. wrapping things up in a sphere is a non-affine transformation, so it wouldn't be something that could just be done with matrices, but it's not fundamentally complicated.
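(the basic version of this -- not the rhombic-net thing i'd actually need, just the usual "drop each vertex by about d^2 / (2r)" approximation of curvature -- looks something like this, sketched in plain haskell rather than actual shader code:)

import Linear (V3 (..), norm, (^-^))

-- r is the planet radius, eye is the viewer position, p is a world-space vertex
curveVertex :: Float -> V3 Float -> V3 Float -> V3 Float
curveVertex r eye p = p ^-^ V3 0 (d * d / (2 * r)) 0
  where d = norm ((p ^-^ eye) * V3 1 0 1)   -- horizontal distance from the viewer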
anyway long story short i was able to write a curvature shader, but not one that was topologically close to what i'd need to render arbitrary rhombic sections stitched together in a net, on account of i was fundamentally not understanding some of the topological properties of the space i wanted until i started to actually write code.
also it looks pretty wild when attached to an actual landscape render and not just the worldmap
this was definitely a learning experience, but i'm gonna give up on that for now. maybe i'll start in on simulation stuff tomorrow, since i feel like that might be something i could actually make some headway in. we'll see.