Someone with my system of values cares about the future -- the indefinitely far future, to be exact. For example, when I think about the future 30 years from now, my main concern is how many intelligent agents there will be then, what their capabilities will be, and what their terminal values will be -- because those things will be the main determinants of how the future evolves after that (after 30 years from now).
For me to survive for 30 years (keeping my intelligence, knowledge and terminal values intact) is one way for me to attend to this main concern -- one way for me to ensure the presence 30 years from now of an intelligent agent with certain values -- but it might not be the only way. If there is another, better way, it is my policy to choose it. My personal survival has no intrinsic value in my system: it is strictly a means to an end.
Now I have to qualify that last paragraph a little. Like everyone else, I inherited and introjected certain cognitive biases. It is likely that these biases make it impossible for me to view my personal survival objectively. Consequently, I might not be able to implement my policy in a perfectly rational way (because I am biased). But that is a second-order effect: to first order, I expect my effect on reality to be determined by the explicit terminal values I have described in these pages.
Let us adopt a simple model in which an intelligent agent's effect on reality is determined essentially by the agent's intelligence, knowledge and terminal values.
Under this model, if I could perfectly transfer my knowledge and terminal values to someone 30 years younger than I who has the same level of intelligence and life expectancy as I, that would be as good -- where "good" is defined according to the system of values described in these pages -- as extending my life by 30 years while keeping my intelligence, knowledge and terminal values intact.
What matters in my system is the presence of intelligent agents and the intelligence, knowledge and terminal values of those agents. (This is what matters because intelligent agents are by far the most potent determinants of how reality will evolve.) The particular identity of those agents does not matter: it does not matter, for example, whether one of those agents is me or someone with the same level of intelligence, life expectancy and terminal values as I whom I taught what I know.
My point is that if teaching means transferring one's knowledge and terminal values to someone else, then one can view teaching as a form of life extension. And I tend to think that teaching is a more potent form of life extension than what is usually called life extension: if you have limited resources, it is probably more effective to invest those resources in teaching than in trying to extend your lifespan. That is, if you subscribe to my system of values.